Sample records for high level computational

  1. High school computer science education paves the way for higher education: the Israeli case

    NASA Astrophysics Data System (ADS)

    Armoni, Michal; Gal-Ezer, Judith

    2014-07-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to computer science in high school and pursuing computing in higher education. We also examine the gender gap, in the context of high school computer science education. We show that in Israel, students who took the high-level computer science matriculation exam were more likely to pursue computing in higher education. Regarding the issue of gender, we will show that, in general, in Israel the difference between males and females who take computer science in high school is relatively small, and a larger, though still not very large difference exists only for the highest exam level. In addition, exposing females to high-level computer science in high school has more relative impact on pursuing higher education in computing.

  2. High level language for measurement complex control based on the computer E-100I

    NASA Technical Reports Server (NTRS)

    Zubkov, B. V.

    1980-01-01

    A high level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given to control the measuring and actuating devices. The procedure of including these programs in the suggested high level language is described.

  3. Factors influencing exemplary science teachers' levels of computer use

    NASA Astrophysics Data System (ADS)

    Hakverdi, Meral

    This study examines exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their students' use of computer applications/tools in or for their science class. After a relevant review of the literature, certain variables were selected for analysis. These variables included personal self-efficacy in teaching with computers, outcome expectancy, pupil-control ideology, level of computer use, age, gender, teaching experience, personal computer use, professional computer use, and science teachers' level of knowledge/skills in using specific computer applications for science instruction. The sample for this study includes middle and high school science teachers who received the Presidential Award for Excellence in Science Teaching (sponsored by the White House and the National Science Foundation) between 1997 and 2003 from all 50 states and U.S. territories. Award-winning science teachers were contacted about the survey via e-mail or letter with an enclosed return envelope. Of the 334 award-winning science teachers, usable responses were received from 92 science teachers, a response rate of 27.5%. Analysis of the survey responses indicated that exemplary science teachers have a variety of knowledge/skills in using computer-related applications/tools. The most commonly used computer applications/tools are information retrieval via the Internet, presentation tools, online communication, digital cameras, and data collection probes. Results of the study revealed that students' use of technology in their science classroom is highly correlated with the frequency of their science teachers' use of computer applications/tools. The results of the multiple regression analysis revealed that personal self-efficacy was related to the exemplary science teachers' level of computer use, suggesting that computer use depends on perceived ability to use computers. The teachers' use of computer-related applications/tools during class and their personal self-efficacy, age, and gender are highly related to their level of knowledge/skills in using specific computer applications for science instruction. The teachers' level of knowledge/skills in using specific computer applications for science instruction and gender were related to their use of computer-related applications/tools during class and to the students' use of computer-related applications/tools in or for their science class. In conclusion, exemplary science teachers need assistance in learning and using computer-related applications/tools in their science classes.

  4. Users matter: multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  5. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
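
    The abstract mentions thread- and device-level load balancing but does not spell out a scheme. Purely as a hypothetical illustration (not the paper's algorithm), one common device-level strategy is to split the photon budget across CPUs and GPUs in proportion to measured throughput; all names and numbers below are assumptions.

```python
# Hypothetical sketch: split a Monte Carlo photon budget across heterogeneous
# devices in proportion to their measured throughput (photons/second).
# Names and numbers are illustrative, not taken from the paper.

def partition_photons(total_photons, device_throughput):
    """Return a per-device photon count proportional to throughput."""
    total_rate = sum(device_throughput.values())
    shares = {dev: int(total_photons * rate / total_rate)
              for dev, rate in device_throughput.items()}
    # Give any rounding remainder to the fastest device.
    remainder = total_photons - sum(shares.values())
    fastest = max(device_throughput, key=device_throughput.get)
    shares[fastest] += remainder
    return shares

if __name__ == "__main__":
    # e.g. one CPU and two GPUs, with rates taken from a short warm-up run
    rates = {"cpu0": 2.0e5, "gpu0": 4.5e6, "gpu1": 3.1e6}
    print(partition_photons(10_000_000, rates))
```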

  6. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
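
    ProjectQ is open source, so a minimal program in its Python-embedded language can be sketched along the lines of the project's published examples; exact module names and behavior may differ between versions.

```python
# Minimal ProjectQ-style sketch: allocate one qubit, apply a Hadamard, measure.
# Based on ProjectQ's public examples; the exact API may vary between versions.
from projectq import MainEngine
from projectq.ops import H, Measure

eng = MainEngine()            # default compiler engines and simulator back end
qubit = eng.allocate_qubit()  # allocate a single qubit

H | qubit                     # put the qubit into superposition
Measure | qubit               # measure in the computational basis

eng.flush()                   # push the circuit through the compiler/back end
print("Measured:", int(qubit))
```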

  7. The Relationship between Internet and Computer Game Addiction Level and Shyness among High School Students

    ERIC Educational Resources Information Center

    Ayas, Tuncay

    2012-01-01

    This study is conducted to determine the relationship between the internet and computer games addiction level and the shyness among high school students. The participants of the study consist of 365 students attending high schools in Giresun city centre during 2009-2010 academic year. As a result of the study a positive, meaningful, and high…

  8. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  9. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
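
    The TLPOM itself is not reproduced in the record. Purely as a hypothetical sketch of how a blocks-and-threads configuration might be chosen under per-node resource limits, a simplified occupancy-style estimate could look like the following; the hardware limits, candidate block sizes, and scoring rule are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch: pick a CUDA threads-per-block value with a simplified
# occupancy estimate. The hardware limits below are illustrative placeholders,
# not the TLPOM from the paper.

def blocks_per_sm(threads_per_block, regs_per_thread, smem_per_block,
                  max_threads_per_sm=2048, regs_per_sm=65536,
                  smem_per_sm=49152, max_blocks_per_sm=32):
    by_threads = max_threads_per_sm // threads_per_block
    by_regs = regs_per_sm // (regs_per_thread * threads_per_block)
    by_smem = smem_per_sm // smem_per_block if smem_per_block else max_blocks_per_sm
    return min(by_threads, by_regs, by_smem, max_blocks_per_sm)

def best_block_size(regs_per_thread, smem_per_block,
                    candidates=(128, 192, 256, 384, 512)):
    # Score candidates by resident threads per SM (a crude proxy for TLP).
    def resident_threads(tpb):
        return blocks_per_sm(tpb, regs_per_thread, smem_per_block) * tpb
    return max(candidates, key=resident_threads)

if __name__ == "__main__":
    print(best_block_size(regs_per_thread=48, smem_per_block=4096))
```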

  10. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  11. Connectionist Models and Parallelism in High Level Vision.

    DTIC Science & Technology

    Feldman, Jerome A.

    1985-01-01

    Computer science is just beginning to look seriously at parallel computation: it may turn out that...the chair. The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image. These

  12. High-Level Data-Abstraction System

    NASA Technical Reports Server (NTRS)

    Fishwick, P. A.

    1986-01-01

    Communication with data-base processor flexible and efficient. High Level Data Abstraction (HILDA) system is three-layer system supporting data-abstraction features of Intel data-base processor (DBP). Purpose of HILDA establishment of flexible method of efficiently communicating with DBP. Power of HILDA lies in its extensibility with regard to syntax and semantic changes. HILDA's high-level query language readily modified. Offers powerful potential to computer sites where DBP attached to DEC VAX-series computer. HILDA system written in Pascal and FORTRAN 77 for interactive execution.

  13. Computer Self-Efficacy among Senior High School Teachers in Ghana and the Functionality of Demographic Variables on Their Computer Self-Efficacy

    ERIC Educational Resources Information Center

    Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel

    2017-01-01

    The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and Seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…

  14. Cyber Foraging for Improving Survivability of Mobile Systems

    DTIC Science & Technology

    2016-02-10

    environments, such as dynamic context, limited computing resources, disconnected-intermittent-limited (DIL) network connectivity, and high levels of stress... Table 1 of the report maps cloudlet features to survivability requirements, with threats including intermittent cloudlet-enterprise connectivity, mobility, and limited...

  15. Students' Viewpoint of Computer Game for Training in Indonesian Universities and High Schools

    ERIC Educational Resources Information Center

    Wahyudin, Didin; Hasegawa, Shinobu; Kamaludin, Apep

    2017-01-01

    This paper describes the survey--conducted in Indonesian universities (UNIV) and high schools (HS)--whose concern is to examine preferences and influences of computer game for training. Comparing the students' viewpoint between both educational levels could determine which educational level would satisfy the need of MAGNITUDE--mobile serious game…

  16. YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning

    ERIC Educational Resources Information Center

    Mustafa, Besim

    2013-01-01

    A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…

  17. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PEs are responsible for transferring, storing, and processing raw image data in a SIMD fashion with their own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  18. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  19. High-School Students' Reasoning while Constructing Plant Growth Models in a Computer-Supported Educational Environment. Research Report

    ERIC Educational Resources Information Center

    Ergazaki, Marida; Komis, Vassilis; Zogza, Vassiliki

    2005-01-01

    This paper highlights specific aspects of high-school students' reasoning while coping with a modeling task of plant growth in a computer-supported educational environment. It is particularly concerned with the modeling levels ('macro-phenomenological' and 'micro-conceptual' level) activated by peers while exploring plant growth and with their…

  20. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper extends research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on a large-scale scientific platform easier to develop, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of the platform when processing big-data-based scientific applications. PMID:24574931
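
    For reference, the kernel behind the block-based Gauss-Jordan example can be sketched as an ordinary (non-distributed) Gauss-Jordan matrix inversion; the grid-distributed, block-partitioned variant evaluated in the paper is not reproduced here.

```python
# Reference Gauss-Jordan inversion in NumPy, shown only to illustrate the kernel
# the paper parallelizes; the block-partitioned, distributed version is not shown.
import numpy as np

def gauss_jordan_inverse(a):
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    aug = np.hstack([a.copy(), np.eye(n)])              # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]            # swap pivot row into place
        aug[col] /= aug[col, col]                        # normalize pivot row
        for row in range(n):                             # eliminate column elsewhere
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

if __name__ == "__main__":
    a = np.array([[4.0, 1.0], [2.0, 3.0]])
    print(np.allclose(gauss_jordan_inverse(a) @ a, np.eye(2)))  # True
```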

  1. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft-we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft, (b) Planetary entry heat shields for space vehicles-we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations, (c) Advanced batteries for electric aircraft-we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy capacity batteries to enable long-distance electric aircraft service; and (d) Shape-memory alloys for high-efficiency aircraft-we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  2. Office workers' computer use patterns are associated with workplace stressors.

    PubMed

    Eijckelhof, Belinda H W; Huysmans, Maaike A; Blatter, Birgitte M; Leider, Priscilla C; Johnson, Peter W; van Dieën, Jaap H; Dennerlein, Jack T; van der Beek, Allard J

    2014-11-01

    This field study examined associations between workplace stressors and office workers' computer use patterns. We collected keyboard and mouse activities of 93 office workers (68F, 25M) for approximately two work weeks. Linear regression analyses examined the associations between self-reported effort, reward, overcommitment, and perceived stress and software-recorded computer use duration, number of short and long computer breaks, and pace of input device usage. Daily duration of computer use was, on average, 30 min longer for workers with high compared to low levels of overcommitment and perceived stress. The number of short computer breaks (30 s-5 min long) was approximately 20% lower for those with high compared to low effort and for those with low compared to high reward. These outcomes support the hypothesis that office workers' computer use patterns vary across individuals with different levels of workplace stressors. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
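
    The abstract defines short computer breaks as gaps of 30 s to 5 min between input events. A minimal, hypothetical sketch of counting such breaks from timestamped keyboard/mouse events (the data format is an assumption, not the study's software) might look like this:

```python
# Hypothetical sketch: count "short" computer breaks (gaps between input events
# of 30 s to 5 min, as defined in the abstract) from a list of event timestamps.

def count_short_breaks(event_times_s, short_min=30.0, short_max=300.0):
    """event_times_s: sorted timestamps (seconds) of keyboard/mouse events."""
    short = 0
    for prev, cur in zip(event_times_s, event_times_s[1:]):
        gap = cur - prev
        if short_min <= gap < short_max:
            short += 1
    return short

if __name__ == "__main__":
    events = [0, 5, 12, 80, 85, 500, 505]   # toy data, in seconds
    print(count_short_breaks(events))        # gaps of 68 s and 415 s -> 1 short break
```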

  3. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  4. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high throughput determinations of interactomes, together with high resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  5. Computational Fluency Performance Profile of High School Students with Mathematics Disabilities

    ERIC Educational Resources Information Center

    Calhoon, Mary Beth; Emerson, Robert Wall; Flores, Margaret; Houchins, David E.

    2007-01-01

    The purpose of this descriptive study was to develop a computational fluency performance profile of 224 high school (Grades 9-12) students with mathematics disabilities (MD). Computational fluency performance was examined by grade-level expectancy (Grades 2-6) and skill area (whole numbers: addition, subtraction, multiplication, division;…

  6. Formal specification of human-computer interfaces

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent

    1990-01-01

    A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.

  7. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  8. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the required squeezing level, 14.8 dB, of the existing fault-tolerant quantum computation. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based, quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of the current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.
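
    The squeezing levels above are quoted in decibels. Under one common convention (vacuum quadrature variance 1/2 with hbar = 1), a dB figure maps to a squeezed quadrature variance as sketched below; this convention is our assumption and is not stated in the abstract.

```python
# Hypothetical conversion from a squeezing level in dB to the squeezed quadrature
# variance, assuming sigma^2 = (1/2) * 10**(-s/10) with vacuum variance 1/2
# (hbar = 1). The convention is an assumption, not taken from the paper.
import math

def squeezed_variance(squeezing_db):
    return 0.5 * 10.0 ** (-squeezing_db / 10.0)

if __name__ == "__main__":
    for s in (10.0, 14.8):
        print(f"{s:4.1f} dB -> variance {squeezed_variance(s):.4f}")
```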

  9. Fostering Girls' Computer Literacy through Laptop Learning: Can Mobile Computers Help To Level Out the Gender Difference?

    ERIC Educational Resources Information Center

    Schaumburg, Heike

    The goal of this study was to find out if the difference between boys and girls in computer literacy can be leveled out in a laptop program where each student has his/her own mobile computer to work with at home and at school. Ninth grade students (n=113) from laptop and non-laptop classes in a German high school were tested for their computer…

  10. Student Perceived Importance and Correlations of Selected Computer Literacy Course Topics

    ERIC Educational Resources Information Center

    Ciampa, Mark

    2013-01-01

    Traditional college-level courses designed to teach computer literacy are in a state of flux. Today's students have high rates of access to computing technology and computer ownership, leading many policy decision makers to conclude that students already are computer literate and thus computer literacy courses are dinosaurs in a modern digital…

  11. The Gender Factor in Computer Anxiety and Interest among Some Australian High School Students.

    ERIC Educational Resources Information Center

    Okebukola, Peter Akinsola

    1993-01-01

    Western Australia eleventh graders (142 boys, 139 girls) were compared on such variables as computers at home, computer classes, experience with computers, and socioeconomic status. Girls had higher anxiety levels, boys higher computer interest. Possible causes included social beliefs about computer use, teacher sex bias, and software (games) more…

  12. High Level Technology in a Low Level Mathematics Course.

    ERIC Educational Resources Information Center

    Schultz, James E.; Noguera, Norma

    2000-01-01

    Describes a teaching experiment in which spreadsheets and computer algebra systems were used to teach a low-level college consumer mathematics course. Students were successful in using different types of functions to solve a variety of problems drawn from real-world situations. Provides an existence proof that computer algebra systems can assist…

  13. Identify Skills and Proficiency Levels Necessary for Entry-Level Employment for All Vocational Programs Using Computers to Process Data. Final Report.

    ERIC Educational Resources Information Center

    Crowe, Jacquelyn

    This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study was comprised of seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…

  14. Equal Time for Women.

    ERIC Educational Resources Information Center

    Kolata, Gina

    1984-01-01

    Examines social influences which discourage women from pursuing studies in computer science, including monopoly of computer time by boys at the high school level, sexual harassment in college, movies, and computer games. Describes some initial efforts to encourage females of all ages to study computer science. (JM)

  15. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered which would be used to characterize aerospace computers with the space shuttle application as end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system is defined which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  16. Computer Languages: A Practical Guide to the Chief Programming Languages.

    ERIC Educational Resources Information Center

    Sanderson, Peter C.

    All the most commonly-used high-level computer languages are discussed in this book. An introductory discussion provides an overview of the basic components of a digital computer, the general planning of a computer programing problem, and the various types of computer languages. Each chapter is self-contained, emphasizes those features of a…

  17. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  18. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
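
    The abstract describes switching the architecture's working point within a solution space of computing performance, energy consumption and fault tolerance, without giving the selection policy. The following is a purely hypothetical sketch of such a run-time choice; the candidate configurations, their figures, and the weighting scheme are illustrative assumptions, not ARTICo³'s actual mechanism.

```python
# Hypothetical sketch of run-time working-point selection in a space of
# performance, energy and fault tolerance. Candidates, numbers and weights are
# illustrative assumptions, not ARTICo3's policy.

CANDIDATES = [
    # (name, relative_throughput, relative_energy_per_task, redundancy_level)
    ("4-accelerators-simplex", 4.0, 1.0, 1),
    ("2-accelerators-DMR",     2.0, 1.1, 2),
    ("1-accelerator-TMR",      1.0, 1.3, 3),
]

def pick_working_point(w_perf, w_energy, w_fault):
    def score(cfg):
        _, perf, energy, redundancy = cfg
        return w_perf * perf - w_energy * energy + w_fault * redundancy
    return max(CANDIDATES, key=score)[0]

if __name__ == "__main__":
    print(pick_working_point(w_perf=1.0, w_energy=0.5, w_fault=0.2))  # favors throughput
    print(pick_working_point(w_perf=0.2, w_energy=0.2, w_fault=2.0))  # favors redundancy
```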

  19. PCs and Personal Health.

    ERIC Educational Resources Information Center

    Lombardi, Don

    1991-01-01

    Studies suggest that computer work stations may induce high levels of physical and psychological stress. Advises school districts to take a proactive stance on ergonomics. Cites laws and pending litigation regulating computer use in the workspace and offers guidelines for computer users. (MLF)

  20. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  1. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  2. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…

  3. Deaf individuals who work with computers present a high level of visual attention.

    PubMed

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration during the process of constructing different ways of communicating. The aim of this study was to evaluate the level of attention in individuals deaf from birth that worked with computers. A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals that worked with computers, 42 were deaf individuals that did not work, did not know how to use nor used computers (Control 1), 39 individuals with normal hearing that did not work, did not know how to use computers nor used them (Control 2), and 40 individuals with normal hearing that worked with computers (Control 3). The group of subjects deaf from birth that worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared to the control groups. This study highlights the relevance of sensory processing to cognitive processing.

  4. Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.

    1983-01-01

    A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.

  5. Safety of High Speed Ground Transportation Systems : Analytical Methodology for Safety Validation of Computer Controlled Subsystems : Volume 2. Development of a Safety Validation Methodology

    DOT National Transportation Integrated Search

    1995-01-01

    This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...

  6. ISSM-SESAW v1.0: mesh-based computation of gravitationally consistent sea-level and geodetic signatures caused by cryosphere and climate driven mass change

    NASA Astrophysics Data System (ADS)

    Adhikari, Surendra; Ivins, Erik R.; Larour, Eric

    2016-03-01

    A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface employed in contemporary sea-level models naturally suits the spectral methods for numerical evaluation. The capability of these methods to resolve high wave number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales less than about 1 century when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we compute time-varying global geodetic and sea-level signatures associated with recent ice-sheet changes that are derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and predict the corresponding local and global geodetic signatures.

  7. Sulfate radical oxidation of aromatic contaminants: a detailed assessment of density functional theory and high-level quantum chemical methods.

    PubMed

    Pari, Sangavi; Wang, Inger A; Liu, Haizhou; Wong, Bryan M

    2017-03-22

    Advanced oxidation processes that utilize highly oxidative radicals are widely used in water reuse treatment. In recent years, the application of the sulfate radical (SO4•−) as a promising oxidant for water treatment has gained increasing attention. To understand the efficiency of SO4•− in the degradation of organic contaminants in wastewater effluent, it is important to be able to predict the reaction kinetics of various SO4•−-driven oxidation reactions. In this study, we utilize density functional theory (DFT) and high-level wavefunction-based methods (including computationally-intensive coupled cluster methods) to explore the activation energies of SO4•−-driven oxidation reactions on a series of benzene-derived contaminants. These high-level calculations encompass a wide set of reactions including 110 forward/reverse reactions and 5 different computational methods in total. Based on the high-level coupled-cluster quantum calculations, we find that the popular M06-2X DFT functional is significantly more accurate for OH- additions than for SO4•− reactions. Most importantly, we highlight some of the limitations and deficiencies of other computational methods, and we recommend the use of high-level quantum calculations to spot-check environmental chemistry reactions that may lie outside the training set of the M06-2X functional, particularly for water oxidation reactions that involve SO4•− and other inorganic species.

  8. From Clinical Interviews to Policy Recommendations: A Case Study in High School Computer Programming. Study of Stanford and the Schools Technology Panel.

    ERIC Educational Resources Information Center

    Sleeman, D.; Gong, Brian

    In order to determine the knowledge and skills needed by novice programmers to successfully learn computer programming, four studies were conducted using a clinical interview technique. The first study determined that many systematic errors in programming were due to programmers' high-level misconceptions of the nature of the computer and of the…

  9. The association between computer literacy and training on clinical productivity and user satisfaction in using the electronic medical record in Saudi Arabia.

    PubMed

    Alasmary, May; El Metwally, Ashraf; Househ, Mowafa

    2014-08-01

    The association of computer literacy and training with the clinical productivity and satisfaction of a recently implemented Electronic Medical Record (EMR) system in Prince Sultan Medical Military City (PSMMC) was investigated. The scope of this study was to explore the association between age, occupation, and computer literacy and the clinical productivity and user satisfaction of the newly implemented EMR at PSMMC, as well as the association of user satisfaction with age and position. A self-administered questionnaire was distributed to all doctors and nurses working in the Alwazarat Family and Community Center (a health center in PSMMC). A convenience sample of 112 healthcare providers (65 nurses and 47 physicians) completed the questionnaire. A combination of correlation, one-way ANOVA, and t-tests was used to answer the research questions. Participants had high levels of self-reported computer literacy and satisfaction with the system. Both levels were higher among physicians than among nurses. A moderate but significant (at the p < 0.01 level) correlation was found between computer literacy and users' satisfaction with the system (R = 0.343). Age was weakly, but significantly (at p < 0.05), positively correlated with satisfaction with the system (R = 0.29). Self-reported system productivity and satisfaction were statistically correlated at p < 0.01 (R = 0.509). A high level of satisfaction with training on using the system was not positively correlated with overall satisfaction with using the system. This study demonstrated that EMR users with high computer literacy skills were more satisfied with using the EMR than users with low computer literacy skills.
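
    As a minimal illustration of the kind of correlation analysis reported above (e.g., computer literacy versus user satisfaction), a Pearson correlation can be computed with SciPy; the data below are invented for demonstration and are not the study's data.

```python
# Minimal illustration of a Pearson correlation between two questionnaire scores.
# The values below are made-up toy data, not the study's measurements.
from scipy.stats import pearsonr

literacy     = [3, 4, 5, 2, 4, 5, 3, 4, 2, 5]   # e.g. Likert-scale scores
satisfaction = [2, 4, 4, 2, 3, 5, 3, 4, 3, 5]

r, p = pearsonr(literacy, satisfaction)
print(f"r = {r:.3f}, p = {p:.4f}")
```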

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D. P.

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
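
    Forthon's own build flow is not shown in the record. As a rough analogue of the general Fortran-to-Python wrapping idea it implements, NumPy's f2py (a different, widely available tool) can expose a Fortran routine to Python; file and routine names below are illustrative.

```python
# Rough analogue of Fortran-to-Python wrapping using NumPy's f2py (not Forthon).
# File and routine names are illustrative.
#
#   ! saxpy.f90
#   subroutine saxpy(n, a, x, y)
#     integer, intent(in) :: n
#     real(8), intent(in) :: a, x(n)
#     real(8), intent(inout) :: y(n)
#     y = a * x + y
#   end subroutine saxpy
#
# Build the extension module first:
#   python -m numpy.f2py -c saxpy.f90 -m fsaxpy
#
# Then drive the compiled Fortran from high-level Python:
import numpy as np
import fsaxpy                       # the module generated by f2py above

x = np.arange(5, dtype=np.float64)
y = np.ones(5, dtype=np.float64)
fsaxpy.saxpy(len(x), 2.0, x, y)     # y becomes 2*x + 1, updated in place
print(y)
```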

  11. Coastal Storm Hazards from Virginia to Maine

    DTIC Science & Technology

    2015-11-01

    study, storm surge, tide, waves, wind, atmospheric pressure, and currents were the dominant storm responses computed. The effect of sea level change on...coastal storm hazards and vulnerability nationally (USACE 2015). NACCS goals also included evaluating the effect of future sea level change (SLC) on...the computed high-fidelity responses included storm surge, astronomical tide, waves, wave effects on water levels, storm duration, wind, currents

  12. Greek Undergraduate Physical Education Students' Basic Computer Skills

    ERIC Educational Resources Information Center

    Adamakis, Manolis; Zounhia, Katerina

    2013-01-01

    The purposes of this study were to determine how undergraduate physical education (PE) students feel about their level of competence concerning basic computer skills and to examine possible differences between groups (gender, specialization, high school graduation type, and high school direction). Although many students and educators believe…

  13. Computer-Mediated Communication as Experienced by Korean Women Students in US Higher Education

    ERIC Educational Resources Information Center

    Baek, Mikyung; Damarin, Suzanne K.

    2008-01-01

    Having grown up in an age of rapidly developing electronic communication technology, today's students come to higher education with high levels of comfort and familiarity with computer-mediated communication (CMC, hereafter). The students' level of comfort with CMC, coupled with CMC's promises of enabling supplemental class discussion as well as…

  14. DECUS Proceedings; Fall 1971, Papers and Presentations.

    ERIC Educational Resources Information Center

    1971

    Papers and presentations at the 1971 symposium of the Digital Equipment Computer Users Society (DECUS) are presented. The papers deal with medical and physiological applications, computer graphics, simulation education, small computer executive systems, management information tools, data acquisition systems, and high level languages. Although many…

  15. Cloud@Home: A New Enhanced Computing Paradigm

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of Ethical computing, starting from the assumption that in the near future energy costs will be related to environmental pollution).

  16. Development of Intelligent Computer-Assisted Instruction Systems to Facilitate Reading Skills of Learning-Disabled Children

    DTIC Science & Technology

    1993-12-01

    The purpose of this thesis is to develop a high-level model to create self-adapting software which teaches learning...stimulating and demanding. The power of the system model described herein is that it can vary as needed by the individual student. The system will

  17. Deaf individuals who work with computers present a high level of visual attention

    PubMed Central

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration during the process of constructing different ways of communicating. Objective: The aim of this study was to evaluate the level of attention in individuals deaf from birth that worked with computers. Methods: A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals that worked with computers, 42 were deaf individuals that did not work, did not know how to use nor used computers (Control 1), 39 individuals with normal hearing that did not work, did not know how to use computers nor used them (Control 2), and 40 individuals with normal hearing that worked with computers (Control 3). Results: The group of subjects deaf from birth that worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared to the control groups. Conclusion: This study highlights the relevance of sensory processing to cognitive processing. PMID:29213734

  18. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    NASA Astrophysics Data System (ADS)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective of studying the evolution of statistically representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.
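
    The paper does not publish code; a minimal single-function sketch of curvature-driven level-set evolution in NumPy (illustrative grid, mobility and time step, and without the per-grain local level-set functions and ccNUMA optimizations that are the paper's actual contribution) is:

      import numpy as np

      def curvature_flow_step(phi, M=1.0, dt=1e-4, h=1.0, eps=1e-12):
          """One explicit step of curvature-driven motion, phi_t = M*kappa*|grad phi|.

          Minimal single-function sketch; the paper's algorithm keeps a separate
          *local* level-set function per grain to scale on ccNUMA nodes.
          """
          px, py = np.gradient(phi, h)                 # first derivatives
          pxx = np.gradient(px, h, axis=0)             # second derivatives
          pyy = np.gradient(py, h, axis=1)
          pxy = np.gradient(px, h, axis=1)
          grad2 = px**2 + py**2
          # Mean curvature of the level sets of phi
          kappa = (pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2) / (grad2**1.5 + eps)
          return phi + dt * M * kappa * np.sqrt(grad2)

      # Toy example: a circular grain shrinks under curvature flow
      x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128), indexing="ij")
      phi = 0.5 - np.sqrt(x**2 + y**2)                 # >0 inside the grain
      for _ in range(50):
          phi = curvature_flow_step(phi, dt=1e-4, h=2.0 / 127)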

  19. Improving self-regulated learning junior high school students through computer-based learning

    NASA Astrophysics Data System (ADS)

    Nurjanah; Dahlan, J. A.

    2018-05-01

    This study is motivated by the importance of self-regulated learning as an affective aspect that determines students' success in learning mathematics. The purpose of this research is to examine the improvement of junior high school students' self-regulated learning through computer-based learning, reviewed both as a whole and by school level. A quasi-experimental method was used because individual sample subjects were not randomly selected. The research design is a Pretest-and-Posttest Control Group Design. Subjects were grade VIII junior high school students in Bandung drawn from a high-level school (A) and a middle-level school (B). The results show that the increase in self-regulated learning among students who received computer-based learning is higher than among students who received conventional learning. School-level factors have a significant effect on the increase in students' self-regulated learning.

  20. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  1. Self-Directed Student Research through Analysis of Microarray Datasets: A Computer-Based Functional Genomics Practical Class for Masters-Level Students

    ERIC Educational Resources Information Center

    Grenville-Briggs, Laura J.; Stansfield, Ian

    2011-01-01

    This report describes a linked series of Masters-level computer practical workshops. They comprise an advanced functional genomics investigation, based upon analysis of a microarray dataset probing yeast DNA damage responses. The workshops require the students to analyse highly complex transcriptomics datasets, and were designed to stimulate…

  2. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. Such compilers interpret an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translate it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software by omitting the fetch-decode-execute operations of General Purpose Processors (GPPs) and by introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time consuming; implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.
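
    As a hedged illustration (not taken from the paper), the kind of function such a flow targets is RPython-restricted code whose types are fully inferable; the FIR filter below is an invented example that also runs under plain CPython:

      def fir4(samples):
          # RPython-style code: types (list of ints, int results) are statically
          # inferable and no dynamic features are used, so an HLS backend could in
          # principle unroll these loops into a fixed pipeline of multipliers and
          # adders when emitting VHDL.
          coeffs = [1, 3, 3, 1]
          out = []
          for i in range(len(samples) - len(coeffs) + 1):
              acc = 0
              for j in range(len(coeffs)):
                  acc += coeffs[j] * samples[i + j]
              out.append(acc)
          return out

      # Runs unchanged under CPython for functional testing:
      print(fir4([0, 0, 8, 0, 0, 0]))   # -> [24, 24, 8]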

  3. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research that can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incurs heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances recognition accuracy.

  4. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  5. Analytical methodology for safety validation of computer controlled subsystems. Volume 1 : state-of-the-art and assessment of safety verification/validation methodologies

    DOT National Transportation Integrated Search

    1995-09-01

    This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...

  6. High Speed Computing, LANs, and WAMs

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Monacos, Steve

    1994-01-01

    Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optical network architectures that offer increased capacity, a more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.

  7. The presence of mathematics and computer anxiety in nursing students and their effects on medication dosage calculations.

    PubMed

    Glaister, Karen

    2007-05-01

    To determine if the presence of mathematical and computer anxiety in nursing students affects learning of dosage calculations. The quasi-experimental study compared learning outcomes at differing levels of mathematical and computer anxiety when integrative and computer-based learning approaches were used. Participants comprised a cohort of second-year nursing students (n=97). Mathematical anxiety exists in 20% (n=19) of the student nurse population, and 14% (n=13) experienced mathematical testing anxiety. Those students more anxious about mathematics and the testing of mathematics benefited from integrative learning to develop conditional knowledge (F(4,66)=2.52 at p<.05). Computer anxiety was present in 12% (n=11) of participants, with those reporting medium and high levels of computer anxiety performing less well than those with low levels (F(1,81)=3.98 at p<.05). Instructional strategies need to account for the presence of mathematical and computer anxiety when planning an educational program to develop competency in dosage calculations.

  8. The Importance of Outreach Programs to Unblock the Pipeline and Broaden Diversity in ICT Education

    ERIC Educational Resources Information Center

    Lang, Catherine; Craig, Annemieke; Egan, Mary Anne

    2016-01-01

    There is a need for outreach programs to attract a diverse range of students to the computing discipline. The lack of qualified computing graduates to fill the growing number of computing vacancies is of concern to government and industry and there are few female students entering the computing pipeline at high school level. This paper presents…

  9. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on a standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
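
    A hedged sketch of the central subgrid idea (illustrative names and grid sizes, not the operational Elbe model): the wet volume of a coarse cell is integrated from the high-resolution bathymetry it contains, so the cell can be wet, partially wet, or dry without a drying threshold.

      import numpy as np

      def cell_wet_volume(eta, z_subgrid, dA):
          """Wet volume of one coarse cell for free-surface elevation eta.

          z_subgrid : 2D array of high-resolution bed elevations inside the cell
          dA        : area of one subgrid pixel
          The cell is dry if eta lies below every bed point, partially wet if eta
          lies between the lowest and highest bed points, and fully wet otherwise.
          """
          depth = np.maximum(eta - z_subgrid, 0.0)   # water depth at each subgrid pixel
          return depth.sum() * dA

      # Toy example: a sloping bed from -2 m to +1 m inside a 10 m x 10 m cell
      z = np.linspace(-2.0, 1.0, 100).reshape(10, 10)
      print(cell_wet_volume(eta=0.0, z_subgrid=z, dA=1.0))   # partially wet cell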

  10. What Is a Programming Language?

    ERIC Educational Resources Information Center

    Wold, Allen

    1983-01-01

    Explains what a computer programing language is in general, the differences between machine language, assembler languages, and high-level languages, and the functions of compilers and interpreters. High-level languages mentioned in the article are: BASIC, FORTRAN, COBOL, PILOT, LOGO, LISP, and SMALLTALK. (EAO)

  11. The Differences in Motivations of Online Game Players and Offline Game Players: A Combined Analysis of Three Studies at Higher Education Level

    ERIC Educational Resources Information Center

    Hainey, Tom; Connolly, Thomas; Stansfield, Mark; Boyle, Elizabeth

    2011-01-01

    Computer games have become a highly popular form of entertainment and have had a large impact on how University students spend their leisure time. Due to their highly motivating properties computer games have come to the attention of educationalists who wish to exploit these highly desirable properties for educational purposes. Several studies…

  12. Educational interactive multimedia software: The impact of interactivity on learning

    NASA Astrophysics Data System (ADS)

    Reamon, Derek Trent

    This dissertation discusses the design, development, deployment and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates about the fundamentals of direct-current (DC) motor physics and selection. The two versions of Motor Workshop software cover the same basic materials on motors, but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text, and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish---they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer is actively participating in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the 'control group,' and students using the highly interactive software were the 'treatment group.' Data, including pre- and post-performance tests, questionnaire responses, learning style characterizations, activity tracking logs and videotapes were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about the subject of motors. The focus of learning varied between users of the two versions, however. The low-level version was more effective for teaching concepts and terminology, while the high-level version seemed to be more effective for teaching engineering applications.

  13. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  14. Collaboration Levels in Asynchronous Discussion Forums: A Social Network Analysis Approach

    ERIC Educational Resources Information Center

    Luhrs, Cecilia; McAnally-Salas, Lewis

    2016-01-01

    Computer Supported Collaborative Learning literature relates high levels of collaboration to enhanced learning outcomes. However, an agreement on what is considered a high level of collaboration is unclear, especially if a qualitative approach is taken. This study describes how methods of Social Network Analysis were used to design a collaboration…

  15. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  16. BigData and computing challenges in high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve when moving from the LHC to the HL-LHC ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new super-computing facilities, cloud computing and volunteer computing is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world and from academic and commercial cloud providers. We also discuss R&D computing projects started recently at the National Research Center "Kurchatov Institute".

  17. Levels of Questioning and Forms of Feedback: Instructional Factors in Courseware Design.

    ERIC Educational Resources Information Center

    Merrill, John

    High and low level questions as determined by a panel of evaluators were combined with corrective feedback and attribute isolation feedback to form four versions of a computer-based science lesson. The sample consisted of 154 high school chemistry students in a suburban high school. The primary hypothesis was that students who received high level…

  18. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point arithmetic is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
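
    As a minimal, hedged illustration (using Python's standard decimal module, which is not a package discussed in the article) of why precision beyond IEEE 64-bit matters: subtracting nearly equal quantities wipes out the few significant digits that double precision can hold.

      from decimal import Decimal, getcontext

      # IEEE 64-bit double: the small term is below half the machine epsilon,
      # so it is rounded away entirely.
      x = (1.0 + 1e-16) - 1.0
      print(x)                      # 0.0 on typical hardware

      # 50-digit working precision keeps the small term exactly.
      getcontext().prec = 50
      y = (Decimal(1) + Decimal("1e-16")) - Decimal(1)
      print(y)                      # 1E-16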

  19. Learning Density in Vanuatu High School with Computer Simulation: Influence of Different Levels of Guidance

    ERIC Educational Resources Information Center

    Moli, Lemuel; Delserieys, Alice Pedregosa; Impedovo, Maria Antonietta; Castera, Jeremy

    2017-01-01

    This paper presents a study on discovery learning of scientific concepts with the support of computer simulation. In particular, the paper will focus on the effect of the levels of guidance on students with a low degree of experience in informatics and educational technology. The first stage of this study was to identify the common misconceptions…

  20. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    DTIC Science & Technology

    2013-09-01

    solutions to virtualization include lightweight, user-level implementations on Linux operating systems, but these solutions are often dependent on a specific version of...

  1. Risk factors of non-specific neck pain and low back pain in computer-using office workers in China: a cross-sectional study

    PubMed Central

    Jing, Qinglei; Wei, Chen; Lu, Jie

    2017-01-01

    Objectives Several studies have found that inappropriate workstations are associated with musculoskeletal disorders. The present cross-sectional study aimed to identify the risk factors of non-specific neck pain (NP) and low back pain (LBP) among computer-using workers. Design Observational study with a cross-sectional sample. Setting This study surveyed 15 companies in Zhejiang province, China. Participants After excluding participants with missing variables, 417 office workers, including 163 men and 254 women, were analyzed. Outcome measures Demographic information was collected by self-report. The standard Northwick Park Neck Pain Questionnaire and Oswestry Low Back Pain Disability Index, along with other relevant questions, were used to assess the presence of potential occupational risk factors and the perceived levels of pain. Multinomial logistic regression analysis, adjusted for age, sex, body mass index, education, marital status and neck/low back injury, was performed to identify significant risk factors. Results Compared with low-level NP, the computer location (monitor not in front of the operator, but on the right or left side) was associated with ORs of 2.6 and 2.9 for medium- and high-level NP, respectively. For LBP, the computer location (monitor not in front) was associated with an OR of 3.2 for high-level pain, as compared with low-level pain, in females. Significant associations were also observed between the office temperature and LBP (OR 5.4 for high vs low), and between office work duration ≥5 years and NP in female office workers (OR 2.7 for medium vs low). Conclusions Not having the computer monitor located in front of the operator was found to be an important risk factor for NP and LBP in computer-using female workers. This information may not only enable the development of potential preventive strategies but may also provide new insights for designing appropriate workstations. PMID:28404613
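
    The reported odds ratios come from exponentiating multinomial logistic regression coefficients relative to the low-pain reference category; a hedged sketch of that computation on entirely synthetic data (scikit-learn is assumed here, not the study's statistics package) is:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Synthetic toy data: one binary exposure (monitor not in front: 1/0) and a
      # 3-level outcome (0 = low, 1 = medium, 2 = high neck pain).
      rng = np.random.default_rng(0)
      exposure = rng.integers(0, 2, size=400)
      logits = np.column_stack([np.zeros(400), 0.5 * exposure, 1.0 * exposure])
      probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
      outcome = np.array([rng.choice(3, p=p) for p in probs])

      # With more than two classes and the default lbfgs solver, recent scikit-learn
      # fits a multinomial (softmax) model.
      model = LogisticRegression(max_iter=1000)
      model.fit(exposure.reshape(-1, 1), outcome)

      # Odds ratios of medium- and high-level pain versus the low-level reference
      # class are exp(beta_k - beta_0) for the exposure coefficient of each class.
      betas = model.coef_.ravel()
      print(np.exp(betas[1] - betas[0]), np.exp(betas[2] - betas[0]))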

  2. Interactive Supercomputing’s Star-P Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelman, Alan; Husbands, Parry; Leibman, Steve

    2006-09-19

    The thesis of this extended abstract is simple. High productivity comes from high level infrastructures. To measure this, we introduce a methodology that goes beyond the tradition of timing software in serial and tuned parallel modes. We perform a classroom productivity study involving 29 students who have written a homework exercise in a low level language (MPI message passing) and a high level language (Star-P with MATLAB client). Our conclusions indicate what perhaps should be of little surprise: (1) the high level language is always far easier on the students than the low level language. (2) The early versions of the high level language perform inadequately compared to the tuned low level language, but later versions substantially catch up. Asymptotically, the analogy must hold that message passing is to high level language parallel programming as assembler is to high level environments such as MATLAB, Mathematica, Maple, or even Python. We follow the Kepner method that correctly realizes that traditional speedup numbers without some discussion of the human cost of reaching these numbers can fail to reflect the true human productivity cost of high performance computing. Traditional data compares low level message passing with serial computation. With the benefit of a high level language system in place, in our case Star-P running with MATLAB client, and with the benefit of a large data pool: 29 students, each running the same code ten times on three evolutions of the same platform, we can methodically demonstrate the productivity gains. To date we are not aware of any high level system as extensive and interoperable as Star-P, nor are we aware of an experiment of this kind performed with this volume of data.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
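
    The report's kernels target the Intel FPGA offline compilation flow; as a hedged, runnable reference for the same single-precision vector-add kernel on any generic OpenCL device (using pyopencl on the host, which is not the toolchain used in the report):

      import numpy as np
      import pyopencl as cl

      KERNEL = """
      __kernel void vadd(__global const float *a,
                         __global const float *b,
                         __global float *c) {
          int i = get_global_id(0);
          c[i] = a[i] + b[i];
      }
      """

      a = np.random.rand(1 << 20).astype(np.float32)
      b = np.random.rand(1 << 20).astype(np.float32)
      c = np.empty_like(a)

      ctx = cl.create_some_context()          # picks any available OpenCL device
      queue = cl.CommandQueue(ctx)
      mf = cl.mem_flags
      a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
      b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
      c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

      prog = cl.Program(ctx, KERNEL).build()
      prog.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)   # one work-item per element
      cl.enqueue_copy(queue, c, c_buf)
      assert np.allclose(c, a + b)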

  4. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  5. Negotiating Pragmatic Competence in Computer Mediated Communication: The Case of Korean Address Terms

    ERIC Educational Resources Information Center

    Kim, Eun Young; Brown, Lucien

    2014-01-01

    This paper examines how L2 learners of Korean manifest pragmatic competence in their use of address terms in computer mediated communication (CMC) and how they use these terms to negotiate their identities. Four UK-based learners of Korean with competence levels ranging from Novice High through Intermediate High participated in the study,…

  6. Allocation of Resources to Computer Support in Two-Year Colleges: 1979-80.

    ERIC Educational Resources Information Center

    Arth, Maurice P.

    1982-01-01

    Data on levels of computer-related expenditures for two-year colleges are presented. The data show institutions whether their computer-related expenditures, as a percentage of total operating expenditures and in dollars per credit headcount student, are high, medium, or low relative to expenditures of other similarly sized institutions. (MLW)

  7. The Impact of Software on Associate Degree Programs in Electronic Engineering Technology.

    ERIC Educational Resources Information Center

    Hata, David M.

    1986-01-01

    Assesses the range and extent of computer assisted instruction software available in electronic engineering technology education. Examines the need for software skills in four areas: (1) high-level languages; (2) assembly language; (3) computer-aided engineering; and (4) computer-aided instruction. Outlines strategies for the future in three…

  8. Teaching Hackers: School Computing Culture and the Future of Cyber-Rights.

    ERIC Educational Resources Information Center

    Van Buren, Cassandra

    2001-01-01

    Discussion of the need for ethical computing strategies and policies at the K-12 level to acculturate computer hackers away from malicious network hacking focuses on a three-year participant observation ethnographic study conducted at the New Technology High School (California) that examined the school's attempts to socialize its hackers to act…

  9. O1.3. A COMPUTATIONAL TRIAL-BY-TRIAL EEG ANALYSIS OF HIERARCHICAL PRECISION-WEIGHTED PREDICTION ERRORS

    PubMed Central

    Tomiello, Sara; Schöbi, Dario; Weber, Lilian; Haker, Helene; Sandra, Iglesias; Stephan, Klaas Enno

    2018-01-01

    Abstract Background Action optimisation relies on learning about past decisions and on accumulated knowledge about the stability of the environment. In Bayesian models of learning, belief updating is informed by multiple, hierarchically related, precision-weighted prediction errors (pwPEs). Recent work suggests that hierarchically different pwPEs may be encoded by specific neurotransmitters such as dopamine (DA) and acetylcholine (ACh). Abnormal dopaminergic and cholinergic modulation of N-methyl-D-aspartate (NMDA) receptors plays a central role in the dysconnection hypothesis, which considers impaired synaptic plasticity a central mechanism in the pathophysiology of schizophrenia. Methods To probe the dichotomy between DA and ACh and to investigate timing parameters of pwPEs, we tested 74 healthy male volunteers performing a probabilistic reward associative learning task in which the contingency between cues and rewards changed over 160 trials between 0.8 and 0.2. Furthermore, the current study employed pharmacological interventions (amisulpride / biperiden / placebo) and genetic analyses (COMT and ChAT) to probe DA and ACh modulation of these computational quantities. The study was double-blind and between-subject. We inferred, from subject-specific behavioural data, a low-level choice PE about the reward outcome, a high-level PE about the probability of the outcome as well as the respective precision-weights (uncertainties) and used them, in a trial-by-trial analysis, to explain electroencephalogram (EEG) signals (64 channels). Behavioural data was modelled implementing three versions of the Hierarchical Gaussian Filter (HGF), a Rescorla-Wagner model, and a Sutton model with a dynamic learning rate. The computational trajectories of the winning model were used as regressors in single-subject trial-by-trial GLM analyses at the sensor level. The resulting parameter estimates were entered into 2nd-level ANOVAs. The reported results were family-wise error corrected at the peak-level (p<0.05) across the whole brain and time window (outcome phase: 0 - 500ms). Results A three-level HGF best explained the data and was used to compute the computational regressors for EEG analyses. We found a significant interaction between pharmacology and COMT for the high-level precision-weight (uncertainty). Specifically: - At 276 ms after outcome presentation the difference between Met/Met and Val/Met was more positive for amisulpride than for biperiden over occipital electrodes. - At 274ms and 278 ms after outcome presentation the difference between Met/Met and Val/Met was more negative over fronto-temporal electrodes for amisulpride than for placebo, and for amisulpride than for biperiden, respectively. No significant results were detected for the other computational quantities or for the ChAT gene. Discussion The differential effects of pharmacology on the processing of high-level precision-weight (uncertainty) were modulated by the DA-related gene COMT. Previous results linked high-level PEs to the cholinergic basal forebrain. One possible explanation for the current results is that high-level computational quantities are represented in cholinergic regions, which in turn are influenced by dopaminergic projections. In order to disentangle dopaminergic and cholinergic effects on synaptic plasticity further analyses will concentrate on biophysical models (e.g. DCM). This may prove useful in detecting pathophysiological subgroups and might therefore be of high relevance in a clinical setting.
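
    None of the model code is given in the abstract; as a hedged sketch of the simplest comparison model mentioned, a Rescorla-Wagner update weights a low-level outcome prediction error by a fixed learning rate, whereas the winning three-level HGF replaces that constant with trial-by-trial precision weights (all numbers below are arbitrary):

      import random

      def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
          """Trial-by-trial value estimates and low-level prediction errors.

          outcomes : sequence of 0/1 rewards
          alpha    : fixed learning rate (the HGF instead uses precision weights
                     that change from trial to trial)
          """
          v, values, pes = v0, [], []
          for r in outcomes:
              pe = r - v              # low-level outcome prediction error
              v = v + alpha * pe      # the "precision weight" here is the constant alpha
              values.append(v)
              pes.append(pe)
          return values, pes

      # Contingency reversal (0.8 -> 0.2), similar in spirit to the task described
      random.seed(1)
      outcomes = [int(random.random() < 0.8) for _ in range(80)] + \
                 [int(random.random() < 0.2) for _ in range(80)]
      values, pes = rescorla_wagner(outcomes)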

  10. Expectations of Competency: The Mismatch between Employers' and Graduates' Views of End-User Computing Skills Requirements in the Workplace

    ERIC Educational Resources Information Center

    Gibbs, Shirley; Steel, Gary; Kuiper, Alison

    2011-01-01

    The use of computers has become part of everyday life. The high prevalence of computer use appears to lead employers to assume that university graduates will have the good computing skills necessary in many graduate level jobs. This study investigates how well the expectations of employers match the perceptions of near-graduate students about the…

  11. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of the high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creating the framework for a portable parallel architecture for machine-independent Multi Instruction Multi Data (MIMD), Single Instruction Multi Data (SIMD), hybrid and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  12. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  13. A comparative approach to closed-loop computation.

    PubMed

    Roth, E; Sponberg, S; Cowan, N J

    2014-04-01

    Neural computation is inescapably closed-loop: the nervous system processes sensory signals to shape motor output, and motor output consequently shapes sensory input. Technological advances have enabled neuroscientists to close, open, and alter feedback loops in a wide range of experimental preparations. The experimental capability of manipulating the topology-that is, how information can flow between subsystems-provides new opportunities to understand the mechanisms and computations underlying behavior. These experiments encompass a spectrum of approaches from fully open-loop, restrained preparations to the fully closed-loop character of free behavior. Control theory and system identification provide a clear computational framework for relating these experimental approaches. We describe recent progress and new directions for translating experiments at one level in this spectrum to predictions at another level. Operating across this spectrum can reveal new understanding of how low-level neural mechanisms relate to high-level function during closed-loop behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    DTIC Science & Technology

    2004-07-01

    steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate

  15. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    PubMed

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high performance biological sequence database scanning with the Smith-Waterman algorithm and the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
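
    For orientation, the Smith-Waterman recurrence that the cluster code vectorizes and distributes can be stated as a short serial reference (linear gap penalty and illustrative scores, not the optimized implementation at the repository above):

      def smith_waterman(seq1, seq2, match=2, mismatch=-1, gap=-2):
          """Serial reference Smith-Waterman: best local alignment score."""
          rows, cols = len(seq1) + 1, len(seq2) + 1
          H = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  s = match if seq1[i - 1] == seq2[j - 1] else mismatch
                  H[i][j] = max(0,
                                H[i - 1][j - 1] + s,   # match/mismatch
                                H[i - 1][j] + gap,     # gap in seq2
                                H[i][j - 1] + gap)     # gap in seq1
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))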

  16. Computer-based learning in neuroanatomy: A longitudinal study of learning, transfer, and retention

    NASA Astrophysics Data System (ADS)

    Chariker, Julia H.

    A longitudinal experiment was conducted to explore computer-based learning of neuroanatomy. Using a realistic 3D graphical model of neuroanatomy, and sections derived from the model, exploratory graphical tools were integrated into interactive computer programs so as to allow adaptive exploration. 72 participants learned either sectional anatomy alone or learned whole anatomy followed by sectional anatomy. Sectional anatomy was explored either in perceptually continuous animation or discretely, as in the use of an anatomical atlas. Learning was measured longitudinally to a high performance criterion. After learning, transfer to biomedical images and long-term retention was tested. Learning whole anatomy prior to learning sectional anatomy led to a more efficient learning experience. Learners demonstrated high levels of transfer from whole anatomy to sectional anatomy and from sectional anatomy to complex biomedical images. All learning groups demonstrated high levels of retention at 2--3 weeks.

  17. Cake: Enabling High-level SLOs on Shared Storage Systems

    DTIC Science & Technology

    2012-11-07

    Cake: Enabling High-level SLOs on Shared Storage Systems. Andrew Wang, Shivaram Venkataraman, Sara Alspaugh, Randy H. Katz, Ion Stoica. Electrical...

  18. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    NASA Astrophysics Data System (ADS)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, while the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, not requiring high-end computing hardware, and being easy to operate.

  19. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  20. The Julia programming language: the future of scientific computing

    NASA Astrophysics Data System (ADS)

    Gibson, John

    2017-11-01

    Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.
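
    The abstract does not specify the benchmark PDE or share code; as an illustrative stand-in (written in NumPy rather than Julia, with arbitrary parameters), a pseudo-spectral Lawson-Euler step for the 1d viscous Burgers equation has the same shape as the kind of kernel being compared:

      import numpy as np

      # Pseudo-spectral Lawson-Euler stepping for viscous Burgers: u_t + u u_x = nu u_xx
      N, nu, dt = 256, 0.05, 1e-3
      x = 2 * np.pi * np.arange(N) / N
      k = 1j * np.fft.fftfreq(N, d=1.0 / N)          # i*k wavenumbers on [0, 2*pi)
      E = np.exp(nu * (k ** 2) * dt)                 # integrating factor for the diffusion term

      u = np.sin(x)
      v = np.fft.fft(u)
      for _ in range(2000):
          u = np.real(np.fft.ifft(v))
          nonlin = -np.fft.fft(u * np.real(np.fft.ifft(k * v)))   # -FFT(u * u_x)
          v = E * (v + dt * nonlin)                  # first-order Lawson-Euler step

      u = np.real(np.fft.ifft(v))                    # solution at t = 2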

  1. Transient Three-Dimensional Side Load Analysis of a Film Cooled Nozzle

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Guidos, Mike

    2008-01-01

    Transient three-dimensional numerical investigations of the side load physics for an engine encompassing a film-cooled nozzle extension and a regeneratively cooled thrust chamber were performed. The objectives of this study are to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a transient inlet history based on an engine system simulation. Ultimately, the computational results will be provided to the nozzle designers for estimating the effect of the peak side load on the nozzle structure. Computations simulating engine startup at ambient pressures corresponding to sea level and three high altitudes were performed. In addition, computations for both engine startup and shutdown transients were performed for a stub nozzle operating at sea level. For the engine with the full nozzle extension, the computations show that, starting up at sea level, the peak side load occurs when the lambda shock steps into the turbine exhaust flow, with the side load caused by the transition from free-shock separation to restricted-shock separation coming second; the side loads decrease rapidly and progressively as the ambient pressure decreases. For the stub nozzle operating at sea level, the computed side loads during both startup and shutdown become very small due to the much reduced flow area.

  2. Relationship between Young Children's Habitual Computer Use and Influencing Variables on Socio-Emotional Development

    ERIC Educational Resources Information Center

    Seo, Hyun Ah; Chun, Hui Young; Jwa, Seung Hwa; Choi, Mi Hyun

    2011-01-01

    This study investigates the relationship between young children's habitual computer use and influencing variables on socio-emotional development. The participants were 179 five-year-old children. The Internet Addiction Scale for Young Children (IASYC) was used to identify children with high and low levels of habituation to computer use. The data…

  3. Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.

  4. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capability and much higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require heavy computation, and this interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful, inexpensive and widely available hardware, so they have become an alternative to conventional processors; graphics chips, once standard application hardware, have been transformed into modern, powerful and programmable processors. The biggest problem is that graphics processing units use programming models that differ from current programming methods, so an efficient GPU program requires re-coding the algorithm with the limitations and structure of the graphics hardware in mind, and multi-core GPUs cannot be programmed with traditional, event-driven programming methods. GPUs are especially effective when the same computing steps are repeated for many data elements and high accuracy is needed, providing results more quickly while maintaining accuracy; by comparison, CPUs, which execute one computation at a time according to the flow control, are slower for such workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were processed and the results evaluated. The GPGPU method is especially useful when the same computations are repeated over highly dense data, allowing the solution to be found quickly and accurately.
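
    Neither the CUDA source nor the test imagery is reproduced here; a hedged NumPy sketch of projective rectification (arbitrary homography, nearest-neighbour resampling) shows the per-pixel independence that the GPGPU implementation exploits, where a CUDA kernel would assign one thread per output pixel instead of relying on NumPy broadcasting:

      import numpy as np

      def projective_rectify(image, H, out_shape):
          """Nearest-neighbour projective rectification.

          Every output pixel (u, v) is mapped through the inverse homography to a
          source pixel -- an embarrassingly parallel, per-pixel computation, which
          is exactly what a GPGPU/CUDA implementation exploits.
          """
          Hinv = np.linalg.inv(H)
          v, u = np.mgrid[0:out_shape[0], 0:out_shape[1]]
          coords = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])   # homogeneous
          x, y, w = Hinv @ coords
          xs = np.rint(x / w).astype(int)
          ys = np.rint(y / w).astype(int)
          valid = (0 <= xs) & (xs < image.shape[1]) & (0 <= ys) & (ys < image.shape[0])
          out = np.zeros(out_shape, dtype=image.dtype)
          out.ravel()[valid] = image[ys[valid], xs[valid]]
          return out

      H = np.array([[1.0, 0.1,   5.0],
                    [0.0, 1.1,   2.0],
                    [0.0, 0.001, 1.0]])          # arbitrary example homography
      img = (np.random.rand(480, 640) * 255).astype(np.uint8)
      rect = projective_rectify(img, H, (480, 640))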

  5. Computational and fMRI Studies of Visualization

    DTIC Science & Technology

    2009-03-31

    spatial thinking in high level cognition, such as in problem-solving and reasoning. In conjunction with the experimental work, the project developed a...computational modeling system (4CAPS) as well as the development of 4CAPS models for particular tasks. The cognitive level of 4CAPS accounts for...neuroarchitecture to interpret and predict the brain activation in a network of cortical areas that underpin the performance of a visual thinking task. The

  6. Applications of massively parallel computers in telemetry processing

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon

    1994-01-01

    Telemetry processing refers to the reconstruction of full resolution raw instrumentation data with artifacts, of space and ground recording and transmission, removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics, of level-zero processing, are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).

  7. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
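
    The paper parallelizes the kernel algorithms within a single dynamic simulation; as a much simpler illustration of the MPI side of the comparison (not the paper's scheme), independent simulation cases can be distributed across ranks with mpi4py, where run_dynamic_case is a hypothetical placeholder:

      from mpi4py import MPI

      def run_dynamic_case(case_id):
          # Placeholder for one transient-stability simulation (network and
          # generator equations integrated over the study window); returns a flag.
          return case_id % 2 == 0

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      cases = list(range(100))                    # contingency cases to screen
      my_cases = cases[rank::size]                # round-robin static load split
      my_results = [(c, run_dynamic_case(c)) for c in my_cases]

      all_results = comm.gather(my_results, root=0)
      if rank == 0:
          flat = [r for chunk in all_results for r in chunk]
          print(len(flat), "cases screened")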

  8. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  9. A preliminary evaluation of nearshore extreme sea level and wave models for fringing reef environments

    NASA Astrophysics Data System (ADS)

    Hoeke, R. K.; Reyns, J.; O'Grady, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    Oceanic islands are widely perceived as vulnerable to sea level rise and are characterized by steep nearshore topography and fringing reefs. In such settings, nearshore dynamics and (non-tidal) water level variability tend to be dominated by wind-wave processes. These processes are highly sensitive to reef morphology and roughness and to regional wave climate. Thus sea level extremes tend to be highly localized, and their likelihood can be expected to change in the future (beyond simple extrapolation of sea level rise scenarios): e.g., sea level rise may increase the effective mean depth of reef crests and flats, and ocean acidification and/or increased temperatures may lead to changes in reef structure. The problem is sufficiently complex that analytic or numerical approaches are necessary to estimate current hazards and explore potential future changes. In this study, we evaluate the capacity of several analytic/empirical approaches and phase-averaged and phase-resolved numerical models at sites in the insular tropical Pacific. We consider their ability to predict time-averaged wave setup and instantaneous water level exceedance probability (or dynamic wave run-up), as well as computational cost; where possible, we compare the model results with in situ observations from a number of previous studies. Preliminary results indicate analytic approaches are by far the most computationally efficient but tend to perform poorly when alongshore straight and parallel morphology cannot be assumed. Phase-averaged models tend to perform well with respect to wave setup in such situations but are unable to predict processes related to individual waves or wave groups, such as infragravity motions or wave run-up. Phase-resolved models tend to perform best but come at high computational cost, an important consideration when exploring possible future scenarios. A new approach combining an unstructured computational grid with a quasi-phase-averaged approach (i.e., only phase-resolving motions below a frequency cutoff) shows promise as a good compromise between computational efficiency and resolving processes such as wave run-up and overtopping in more complex bathymetric situations.

  10. USSR and Eastern Europe Scientific Abstracts, Cybernetics, Computers, and Automation Technology, Number 26

    DTIC Science & Technology

    1977-01-26

    Sisteme Matematicheskogo Obespecheniya YeS EVM [Applied Programs in the Software System for the Unified System of Computers], by A. Ye. Fateyev, A. I...computerized systems are most effective in large production complexes, in which the level of utilization of computers can be as high as 500,000...performance of these tasks could be furthered by the complex introduction of electronic computers in automated control systems. The creation of ASU

  11. A First Attempt to Bring Computational Biology into Advanced High School Biology Classrooms

    PubMed Central

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S.

    2011-01-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology unit on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors. PMID:22046118

  12. A first attempt to bring computational biology into advanced high school biology classrooms.

    PubMed

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology unit on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  13. High School Students' Written Argumentation Qualities with Problem-Based Computer-Aided Material (PBCAM) Designed about Human Endocrine System

    ERIC Educational Resources Information Center

    Vekli, Gülsah Sezen; Çimer, Atilla

    2017-01-01

    This study investigated the development of students' scientific argumentation levels in applications made with Problem-Based Computer-Aided Material (PBCAM) designed for the Human Endocrine System. The case study method was used; the study group consisted of 43 students in the 11th grade of a science high school in Rize. Human Endocrine System…

  14. Effect of Computer-Delivered Testing on Achievement in a Mastery Learning Course of Study with Partial Scoring and Variable Pacing.

    ERIC Educational Resources Information Center

    Evans, Richard M.; Surkan, Alvin J.

    The recent arrival of portable computer systems with high-level language interpreters now makes it practical to rapidly develop complex testing and scoring programs. These programs permit undergraduates access, at arbitrary times, to testing as an integral part of a mastery learning strategy. Effects of introducing the computer were studied by…

  15. Effect of Physical Education Teachers' Computer Literacy on Technology Use in Physical Education

    ERIC Educational Resources Information Center

    Kretschmann, Rolf

    2015-01-01

    Teachers' computer literacy has been identified as a factor that determines their technology use in class. The aim of this study was to investigate the relationship between physical education (PE) teachers' computer literacy and their technology use in PE. The study group consisted of 57 high school level in-service PE teachers. A survey was used…

  16. Research on Young Women in Computer Science: Promoting High Technology for Girls.

    ERIC Educational Resources Information Center

    Crombie, Gail

    When the public school system of Ontario, Canada, began offering an all-female computer science course for girls in grade 11, female enrollment in computer science increased to approximately 40%. This increased enrollment level has been maintained for 3 years. The new course's effects on girls' attitudes were examined in a survey of 184 grade 11…

  17. Biology Teacher and Expert Opinions about Computer Assisted Biology Instruction Materials: A Software Entitled Nucleic Acids and Protein Synthesis

    ERIC Educational Resources Information Center

    Hasenekoglu, Ismet; Timucin, Melih

    2007-01-01

    The aim of this study is to collect and evaluate the opinions of CAI experts and biology teachers about a high school level Computer Assisted Biology Instruction Material presenting computer-made modelling and simulations. It is a case study. A material covering the "Nucleic Acids and Protein Synthesis" topic was developed as the…

  18. High School Students Learning University Level Computer Science on the Web: A Case Study of the "DASK"-Model

    ERIC Educational Resources Information Center

    Grandell, Linda

    2005-01-01

    Computer science is becoming increasingly important in our society. Meta skills, such as problem solving and logical and algorithmic thinking, are emphasized in every field, not only in the natural sciences. Still, largely due to gaps in tuition, common misunderstandings exist about the true nature of computer science. These are especially…

  19. Thermal/structural modeling of a large scale in situ overtest experiment for defense high level waste at the Waste Isolation Pilot Plant Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.; Stone, C.M.; Krieg, R.D.

    Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described, and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.

  20. A comprehensive pipeline for multi-resolution modeling of the mitral valve: Validation, computational efficiency, and predictive capability.

    PubMed

    Drach, Andrew; Khalighi, Amir H; Sacks, Michael S

    2018-02-01

    Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improve surgical outcomes, the complex MV geometry precludes use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of the MV with varying fidelity directly from in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build a high-resolution, attribute-rich computational model. Subsequent lower resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, we also noted that more detailed models were found to be needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of the MV in normal, dilated, and repaired states. Copyright © 2017 John Wiley & Sons, Ltd.

  1. High School Mathematics Teachers' Levels of Achieving Technology Integration and In-Class Reflections: The Case of Mathematica

    ERIC Educational Resources Information Center

    Ardiç, Mehmet Alper; Isleyen, Tevfik

    2017-01-01

    The purpose of this study is to determine high school mathematics teachers' levels of achieving technology integration in mathematics instruction via computer algebra systems and the in-class reflections of these practices. Three high school mathematics teachers employed at different types of schools participated in the study. In the beginning of this…

  2. Bridging the digital divide through the integration of computer and information technology in science education: An action research study

    NASA Astrophysics Data System (ADS)

    Brown, Gail Laverne

    The presence of a digital divide, computer and information technology integration effectiveness, and barriers to continued usage of computer and information technology were investigated. Thirty-four African American and Caucasian American students (17 males and 17 females) in grades 9-11 from 2 Georgia high school science classes were exposed to 30 hours of hands-on computer and information technology skills. The purpose of the exposure was to improve students' computer and information technology skills. Pre-study and post-study skills surveys and structured interviews were used to compare race, gender, income, grade-level, and age differences with respect to computer usage. A paired t-test and McNemar test determined mean differences between students' pre-study and post-study perceived skills levels. The results were consistent with findings of the National Telecommunications and Information Administration (2000) that indicated the presence of a digital divide and digital inclusion. Caucasian American participants were found to have more at-home computer and Internet access than African American participants, indicating a digital divide by ethnicity. Caucasian American females were found to have more computer and Internet access, which was an indication of digital inclusion. Sophomores had more at-home computer and Internet access than students at other grade levels, indicating digital inclusion. Students receiving regular meals had more computer and Internet access than students receiving free/reduced meals. Older students had more computer and Internet access than younger students. African American males had been using computer and information technology the longest, which is an indication of digital inclusion. The paired t-test and McNemar test revealed significant perceived increases in all student skill levels. Interviews did not reveal any barriers to continued usage of the computer and information technology skills.

  3. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations, ranging from peta-flop supercomputers and high-end tera-flop facilities running a variety of operating systems and applications to mid-range and smaller computational clusters used for HPC application development, pilot runs, and prototype staging. What they all have in common is that they operate as stand-alone systems rather than as scalable, shared, user-reconfigurable resources. The advent of cloud computing has changed the traditional HPC implementation. In this article, we discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called the Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data showing that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  4. Risk factors of non-specific neck pain and low back pain in computer-using office workers in China: a cross-sectional study.

    PubMed

    Ye, Sunyue; Jing, Qinglei; Wei, Chen; Lu, Jie

    2017-04-11

    Several studies have found that inappropriate workstations are associated with musculoskeletal disorders. The present cross-sectional study aimed to identify the risk factors of non-specific neck pain (NP) and low back pain (LBP) among computer-using workers. Observational study with a cross-sectional sample. This study surveyed 15 companies in Zhejiang province, China. After excluding participants with missing variables, 417 office workers, including 163 men and 254 women, were analyzed. Demographic information was collected by self-report. The standard Northwick Park Neck Pain Questionnaire and Oswestry Low Back Pain Disability Index, along with other relevant questions, were used to assess the presence of potential occupational risk factors and the perceived levels of pain. Multinomial logistic regression analysis, adjusted for age, sex, body mass index, education, marital status and neck/low back injury, was performed to identify significant risk factors. Compared with low-level NP, the computer location (monitor not in front of the operator, but on the right or left side) was associated with ORs of 2.6 and 2.9 for medium- and high-level NP, respectively. For LBP, the computer location (monitor not in front) was associated with an OR of 3.2 for high-level pain, as compared with low-level pain, in females. Significant associations were also observed between the office temperature and LBP (OR 5.4 for high vs low), and between office work duration ≥5 years and NP in female office workers (OR 2.7 for medium vs low). Not having the computer monitor located in front of the operator was found to be an important risk factor for NP and LBP in computer-using female workers. This information may not only enable the development of potential preventive strategies but may also provide new insights for designing appropriate workstations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG

    NASA Astrophysics Data System (ADS)

    Rohe, Daniel

    2016-10-01

    The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of how far High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data). Results are provided for two distinct HPC platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researchers to likewise improve the efficiency of their codes.
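
    The sketch below illustrates the three parallelisation levels named above (MPI across nodes, OpenMP threads within a node, SIMD lanes within a core) on a hypothetical summation loop; none of the fRG-specific data structures or flow equations are reproduced here.

      #include <mpi.h>
      #include <stdio.h>

      #define N 1000000

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Level 1: MPI distributes disjoint index blocks across ranks. */
          long lo = (long)rank * N / size, hi = (long)(rank + 1) * N / size;

          double local = 0.0;
          /* Level 2: OpenMP threads share the rank-local block.
             Level 3: each thread's chunk is vectorised via the simd clause. */
          #pragma omp parallel for simd reduction(+:local)
          for (long i = lo; i < hi; ++i)
              local += 1.0 / (1.0 + (double)i);   /* stand-in for the real integrand */

          double total = 0.0;
          MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0) printf("sum = %.6f\n", total);

          MPI_Finalize();
          return 0;
      }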

  6. A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.

    PubMed

    Nichols, David F

    2015-01-01

    Computational simulations allow for a low-cost, reliable means to demonstrate complex and often inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network were used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or having neutral feelings about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring computational skills that are required to address many questions within neuroscience.
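
    The labs themselves are written in MATLAB; purely to illustrate one of the models named above, here is a minimal leaky integrate-and-fire neuron stepped with forward Euler, written in C with made-up parameter values.

      #include <stdio.h>

      int main(void)
      {
          const double dt = 0.1e-3;        /* time step [s]              */
          const double tau = 20e-3;        /* membrane time constant [s] */
          const double v_rest = -70e-3;    /* resting potential [V]      */
          const double v_thresh = -54e-3;  /* spike threshold [V]        */
          const double v_reset = -80e-3;   /* reset potential [V]        */
          const double r_m = 10e6;         /* membrane resistance [ohm]  */
          const double i_inj = 2e-9;       /* injected current [A]       */

          double v = v_rest;
          int spikes = 0;

          for (int step = 0; step < 10000; ++step) {   /* 1 s of simulated time */
              v += dt * (-(v - v_rest) + r_m * i_inj) / tau;
              if (v >= v_thresh) { v = v_reset; ++spikes; }
          }

          printf("spikes in 1 s: %d\n", spikes);
          return 0;
      }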

  7. A Series of Computational Neuroscience Labs Increases Comfort with MATLAB

    PubMed Central

    Nichols, David F.

    2015-01-01

    Computational simulations allow for a low-cost, reliable means to demonstrate complex and often inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network were used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or having neutral feelings about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring computational skills that are required to address many questions within neuroscience. PMID:26557798

  8. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  9. Hyperswitch Communication Network Computer

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    The Hyperswitch Communications Network (HCN) computer is a prototype multiple-processor computer being developed. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905) and is designed to support high-level software and expansion of itself. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  10. Information Science Research: The Search for the Nature of Information.

    ERIC Educational Resources Information Center

    Kochen, Manfred

    1984-01-01

    High-level scientific research in the information sciences is illustrated by sampling of recent discoveries involving adaptive information processing strategies, computer and information systems, centroid scaling, economic growth of computer and communication industries, and information flow in biological systems. Relationship of information…

  11. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  12. Instructor Perspectives of Multiple-Choice Questions in Summative Assessment for Novice Programmers

    ERIC Educational Resources Information Center

    Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl

    2010-01-01

    Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at the tertiary level in Computer Science programs are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with…

  13. Computational Workbench for Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2007-01-01

    PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial Operator Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent a variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.

  14. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At the beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior, and were assessed 4 times from the beginning of 3rd through the end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity problems, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  15. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines the closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.

  16. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high-level Java language API for experimentation with QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high-level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical optimization methods such as GPU-accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
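
    For readers unfamiliar with the problem class QSPIN targets, the sketch below evaluates a tiny QUBO objective E(x) = x^T Q x by brute force over all binary assignments. The 4-variable Q matrix is made up, and the exhaustive loop merely defines the objective that real solvers (quantum annealers, tabu search, simulated annealing) optimize; it does not use the QSPIN API.

      #include <stdio.h>

      #define NVAR 4

      int main(void)
      {
          /* Upper-triangular QUBO matrix: diagonal = linear terms,
             off-diagonal = pairwise couplings (hypothetical values). */
          const double Q[NVAR][NVAR] = {
              { -1.0,  2.0,  0.0,  0.0 },
              {  0.0, -1.0,  2.0,  0.0 },
              {  0.0,  0.0, -1.0,  2.0 },
              {  0.0,  0.0,  0.0, -1.0 },
          };

          double best = 1e300;
          unsigned best_x = 0;

          for (unsigned x = 0; x < (1u << NVAR); ++x) {   /* all 2^NVAR assignments */
              double e = 0.0;
              for (int i = 0; i < NVAR; ++i)
                  for (int j = i; j < NVAR; ++j)
                      e += Q[i][j] * ((x >> i) & 1u) * ((x >> j) & 1u);
              if (e < best) { best = e; best_x = x; }
          }

          printf("ground state energy %.2f at bits 0x%X\n", best, best_x);
          return 0;
      }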

  17. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied to performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  18. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  19. Adaptation of a Control Center Development Environment for Industrial Process Control

    NASA Technical Reports Server (NTRS)

    Killough, Ronnie L.; Malik, James M.

    1994-01-01

    In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.

  20. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and the multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370

  1. Adiabatic Quantum Computing via the Rydberg Blockade

    NASA Astrophysics Data System (ADS)

    Keating, Tyler; Goyal, Krittika; Deutsch, Ivan

    2012-06-01

    We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ^133Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering, we find a high fidelity for the two-qubit implementation.

  2. PICSiP: new system-in-package technology using a high bandwidth photonic interconnection layer for converged microsystems

    NASA Astrophysics Data System (ADS)

    Tekin, Tolga; Töpper, Michael; Reichl, Herbert

    2009-05-01

    Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone does not provide improved performance, lower power, smaller size, and lower cost. It will require "More than Moore" [2] through the tighter integration of system level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high-density off-chip interconnects. Some of the challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. The incentive to use photonics to overcome these challenges and leverage low-latency, high-bandwidth communication will enable the vision of optical computing within next generation architectures. Supercomputers of today offer sustained performance of more than petaflops, which can be increased by utilizing optical interconnects. Next generation computing architectures are needed with ultra-low power consumption and ultra-high performance, enabled by novel interconnection technologies. In this paper we discuss a CMOS-compatible underlying technology to enable next generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next generation optical computing architectures will be leveraged.

  3. Graph Coarsening for Path Finding in Cybersecurity Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh

    2013-01-01

    In the pass-the-hash attack, hackers repeatedly steal password hashes and move through a computer network with the goal of reaching a computer with high level administrative privileges. In this paper we apply graph coarsening to network graphs for the purpose of detecting hackers using this attack or assessing the risk level of the network's current state. We repeatedly take graph minors, which preserve the existence of paths in the graph, and take powers of the adjacency matrix to count the paths. This allows us to detect the existence of paths as well as find paths that have high risk of being used by adversaries.
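
    The path-counting idea described above can be illustrated with a small sketch: entry (i, j) of the k-th power of the adjacency matrix counts walks of length k from i to j, so a positive entry in the accumulated sum A + A^2 + ... + A^k indicates reachability within k hops. The 5-node "credential hop" graph is hypothetical, and the graph-coarsening (minor) step itself is not shown.

      #include <stdio.h>
      #include <string.h>

      #define N 5

      static void matmul(long a[N][N], long b[N][N], long out[N][N])
      {
          for (int i = 0; i < N; ++i)
              for (int j = 0; j < N; ++j) {
                  long s = 0;
                  for (int k = 0; k < N; ++k)
                      s += a[i][k] * b[k][j];
                  out[i][j] = s;
              }
      }

      int main(void)
      {
          /* Directed edges of a toy "credential hop" graph: 0 -> 1 -> 2 -> 3 -> 4. */
          long A[N][N] = {
              {0,1,0,0,0},
              {0,0,1,0,0},
              {0,0,0,1,0},
              {0,0,0,0,1},
              {0,0,0,0,0},
          };

          long power[N][N], reach[N][N], tmp[N][N];
          memcpy(power, A, sizeof power);
          memcpy(reach, A, sizeof reach);

          for (int hop = 2; hop <= 4; ++hop) {        /* accumulate A^2 .. A^4 */
              matmul(power, A, tmp);
              memcpy(power, tmp, sizeof power);
              for (int i = 0; i < N; ++i)
                  for (int j = 0; j < N; ++j)
                      reach[i][j] += power[i][j];
          }

          printf("walks of length <= 4 from node 0 to node 4: %ld\n", reach[0][4]);
          return 0;
      }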

  4. 1:1 Computing Programs: An Analysis of the Stages of Concerns of 1:1 Integration, Professional Development Training and Level of Classroom Use by Illinois High School Teachers

    ERIC Educational Resources Information Center

    Detering, Brad

    2017-01-01

    This research study, grounded in the theoretical framework of education change, used the Concerns-Based Adoption Model of change to examine the concerns of Illinois high school teachers and administrators regarding the implementation of 1:1 computing programs. A quantitative study of educators investigated the stages of concern and the mathematics…

  5. [Economic efficiency of computer monitoring of health].

    PubMed

    Il'icheva, N P; Stazhadze, L L

    2001-01-01

    Presents a method of computer monitoring of health based on the utilization of modern information technologies in public health. The method helps organize the preventive activities of an outpatient clinic at a high level and substantially reduces time and money losses. The efficiency of such preventive measures and the growing number of computer and Internet users suggest that such methods are promising and that further studies in this field are needed.

  6. Computational Tools for Stem Cell Biology

    PubMed Central

    Bian, Qin; Cahan, Patrick

    2016-01-01

    For over half a century, the field of developmental biology has leveraged computation to explore mechanisms of developmental processes. More recently, computational approaches have been critical in the translation of high throughput data into knowledge of both developmental and stem cell biology. In the last several years, a new sub-discipline of computational stem cell biology has emerged that synthesizes the modeling of systems-level aspects of stem cells with high-throughput molecular data. In this review, we provide an overview of this new field and pay particular attention to the impact that single-cell transcriptomics is expected to have on our understanding of development and our ability to engineer cell fate. PMID:27318512

  7. Computational Tools for Stem Cell Biology.

    PubMed

    Bian, Qin; Cahan, Patrick

    2016-12-01

    For over half a century, the field of developmental biology has leveraged computation to explore mechanisms of developmental processes. More recently, computational approaches have been critical in the translation of high throughput data into knowledge of both developmental and stem cell biology. In the past several years, a new subdiscipline of computational stem cell biology has emerged that synthesizes the modeling of systems-level aspects of stem cells with high-throughput molecular data. In this review, we provide an overview of this new field and pay particular attention to the impact that single cell transcriptomics is expected to have on our understanding of development and our ability to engineer cell fate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Graphics Processors in HEP Low-Level Trigger Systems

    NASA Astrophysics Data System (ADS)

    Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero

    2016-11-01

    Usage of Graphics Processing Units (GPUs) in so-called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have employed GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, the latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.

  9. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multi-level parallel computing, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementation on Xeon Phi is clearly superior to that on the x86 CPU architecture.
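
    As a rough illustration of the data-level parallelism discussed above, the sketch below uses a structure-of-arrays particle layout and an OpenMP simd annotation so the compiler can map the inner loop onto wide (e.g., 512-bit) vector registers. The Lennard-Jones-style pair term is a stand-in; the actual Tersoff kernel is three-body and considerably more involved.

      #include <stdio.h>

      #define NP 1024

      static double x[NP], y[NP], z[NP];   /* structure-of-arrays coordinates */

      static double pair_energy(int i)
      {
          double e = 0.0;
          /* The simd pragma asks the compiler to vectorise the j-loop,
             e.g. into 512-bit operations (8 doubles per lane) on Xeon Phi. */
          #pragma omp simd reduction(+:e)
          for (int j = 0; j < NP; ++j) {
              double mask = (double)(j != i);           /* zero out the self term      */
              double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
              double r2 = dx * dx + dy * dy + dz * dz + 1e-12; /* epsilon avoids 0/0  */
              double inv6 = 1.0 / (r2 * r2 * r2);
              e += mask * 4.0 * (inv6 * inv6 - inv6);
          }
          return e;
      }

      int main(void)
      {
          for (int i = 0; i < NP; ++i) { x[i] = i * 0.1; y[i] = 0.0; z[i] = 0.0; }
          printf("pair energy of particle 0: %g\n", pair_energy(0));
          return 0;
      }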

  10. Rovibrational bound states of SO2 isotopologues. I: Total angular momentum J = 0-10

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Ellis, Joseph; Poirier, Bill

    2015-04-01

    Isotopic variation of the rovibrational bound states of SO2 for the four stable sulfur isotopes 32-34,36S is investigated in comprehensive detail. In a two-part series, we compute the low-lying energy levels for all values of total angular momentum in the range J = 0-20. All rovibrational levels are computed to an extremely high level of numerical convergence. The calculations have been carried out using the ScalIT suite of parallel codes. The present study (Paper I) examines the J = 0-10 rovibrational levels, providing unambiguous symmetry and rovibrational label assignments for each computed state. The calculated vibrational energy levels exhibit very good agreement with previously reported experimental and theoretical data. Rovibrational energy levels, calculated without any Coriolis approximations, are reported here for the first time. Among other potential ramifications, this data will facilitate understanding of the origin of mass-independent fractionation of sulfur isotopes in the Archean rock record, which is of great relevance for understanding the "oxygen revolution".

  11. Stream Splitting in Support of Intrusion Detection

    DTIC Science & Technology

    2003-06-01

    increased. Every computer on the Internet has no need to see the traffic of every other computer on the Internet. Indeed, if this were so, nothing would get ...distinguishes the stream splitter from other network analysis tools. B. HIGH LEVEL DESIGN To get the desired level of performance, a multi-threaded...of greater concern than added accuracy of a Bayesian model. This is a case where close is good enough. b. Passive Sensors Though similar to active

  12. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-

  13. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    Efficiency is in high demand in high-performance computing, in terms of both speed and energy. Reciprocal Quantum Logic (RQL) is one technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has a set of three basic gates. Series of reciprocal transmission lines are placed between gates to avoid power loss and to achieve high speed. An analytical model of a Bit-Level Architecture is developed using RQL. A major drawback of Reciprocal Quantum Logic is area, because of the lack of a proper power supply: to achieve a proper power supply we need to use splitters, which occupy a large area. Distributed arithmetic uses a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high performance computational devices.
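
    The distributed-arithmetic scheme described above can be sketched in software: the inner product of a constant coefficient vector with variable inputs is built bit-serially from a precomputed lookup table of coefficient partial sums, replacing multipliers with table lookups and shift-adds. The coefficients and inputs below are made up, the inputs are kept unsigned to keep the sketch short (a hardware version also handles two's-complement sign bits), and nothing RQL-specific is modeled.

      #include <stdio.h>
      #include <stdint.h>

      #define K 4        /* vector length     */
      #define BITS 8     /* input word length */

      int main(void)
      {
          const int32_t coeff[K] = { 3, -5, 7, 2 };    /* constant vector (hypothetical) */
          const uint8_t x[K]     = { 10, 20, 30, 40 }; /* variable inputs                */

          /* Precompute the 2^K partial sums: lut[m] = sum of coeff[k] for bits set in m. */
          int32_t lut[1 << K];
          for (unsigned m = 0; m < (1u << K); ++m) {
              int32_t s = 0;
              for (int k = 0; k < K; ++k)
                  if (m & (1u << k)) s += coeff[k];
              lut[m] = s;
          }

          /* Bit-serial accumulation: process one bit plane of all inputs per step. */
          int32_t acc = 0;
          for (int b = BITS - 1; b >= 0; --b) {
              unsigned addr = 0;
              for (int k = 0; k < K; ++k)
                  addr |= ((x[k] >> b) & 1u) << k;
              acc = (acc << 1) + lut[addr];
          }

          /* Reference multiply-accumulate for comparison. */
          int32_t ref = 0;
          for (int k = 0; k < K; ++k) ref += coeff[k] * (int32_t)x[k];

          printf("distributed arithmetic: %d, direct: %d\n", (int)acc, (int)ref);
          return 0;
      }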

  14. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and they have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
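
    A software-level illustration of the permission idea described above is sketched below: Unix-style read/write/execute bits attached to each cache bank are checked before an access is allowed. The bank table, bit layout, and check are hypothetical simplifications for exposition, not the patented OSFA hardware design.

      #include <stdio.h>
      #include <stdbool.h>
      #include <stdint.h>

      enum { PERM_READ = 04, PERM_WRITE = 02, PERM_EXEC = 01 };

      #define N_BANKS 8

      /* Hypothetical per-bank permission table, e.g. bank 0 holds code (r-x),
         bank 1 holds user data (rw-), and so on. */
      static uint8_t bank_perm[N_BANKS] = {
          PERM_READ | PERM_EXEC, PERM_READ | PERM_WRITE, PERM_READ, 0,
          PERM_READ | PERM_WRITE, PERM_READ | PERM_WRITE, PERM_READ, 0,
      };

      static bool access_allowed(unsigned bank, uint8_t requested)
      {
          if (bank >= N_BANKS) return false;
          /* Every requested permission bit must be present in the bank's bits. */
          return (bank_perm[bank] & requested) == requested;
      }

      int main(void)
      {
          printf("write to bank 1: %s\n", access_allowed(1, PERM_WRITE) ? "ok" : "denied");
          printf("execute in bank 1: %s\n", access_allowed(1, PERM_EXEC) ? "ok" : "denied");
          return 0;
      }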

  15. Exploration of operator method digital optical computers for application to NASA

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Digital optical computer design has been focused primarily towards parallel (single point-to-point interconnection) implementation. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated. The focus is on a figure of merit termed Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and full digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology since it can support a very high degree of three-dimensional interconnect density and high degrees of fan-in without capacitive loading effects at very low power consumption levels.

  16. Teaching Computer Science Courses in Distance Learning

    ERIC Educational Resources Information Center

    Huan, Xiaoli; Shehane, Ronald; Ali, Adel

    2011-01-01

    As the success of distance learning (DL) has driven universities to increase the courses offered online, certain challenges arise when teaching computer science (CS) courses to students who are not physically co-located and have individual learning schedules. Teaching CS courses involves high level demonstrations and interactivity between the…

  17. Understanding Initial Undergraduate Expectations and Identity in Computing Studies

    ERIC Educational Resources Information Center

    Kinnunen, Päivi; Butler, Matthew; Morgan, Michael; Nylen, Aletta; Peters, Anne-Kathrin; Sinclair, Jane; Kalvala, Sara; Pesonen, Erkki

    2018-01-01

    There is growing appreciation of the importance of understanding the student perspective in Higher Education (HE) at both institutional and international levels. This is particularly important in Science, Technology, Engineering and Mathematics subjects such as Computer Science (CS) and Engineering in which industry needs are high but so are…

  18. Integrating Technology to Maximize Learning

    ERIC Educational Resources Information Center

    Jones, Eric

    2007-01-01

    Such initiatives as one-to-one computing, laptop learning, and technology immersion are gaining momentum in middle level and high schools, but the key to their success is more than cutting-edge technology. Henrico County Public Schools, a pioneer in educational technology in Virginia, launched a one-to-one computing initiative in 2001. The…

  19. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Time Tagging the Data

    DTIC Science & Technology

    2015-09-01

    The process described in this report made use of posttest processing techniques to provide packet-level time tagging of each set of test records with an accuracy close to 3 µs relative to Coordinated Universal Time (UTC).

  20. Science of Security Lablet - Scalability and Usability

    DTIC Science & Technology

    2014-12-16

    mobile computing [19]. However, the high-level infrastructure design and our own implementation (both described throughout this paper) can easily...critical and infrastructural systems demands high levels of sophistication in the technical aspects of cybersecurity, software and hardware design...Forget, S. Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang. "Security Behavior Observatory: Infrastructure for Long-term

  1. Influence of the Level Density Parametrization on the Effective GDR Width at High Spins

    NASA Astrophysics Data System (ADS)

    Mazurek, K.; Matejska, M.; Kmiecik, M.; Maj, A.; Dudek, J.

    Parameterizations of the nucleonic level densities are tested by computing the effective GDR strength-functions and GDR widths at high spins. Calculations are based on the thermal shape fluctuation method with the Lublin-Strasbourg Drop (LSD) model. Results for 106Sn, 147Eu, 176W, 194Hg are compared to the experimental data.

  2. A Social Approach to High-Level Context Generation for Supporting Context-Aware M-Learning

    ERIC Educational Resources Information Center

    Pan, Xu-Wei; Ding, Ling; Zhu, Xi-Yong; Yang, Zhao-Xiang

    2017-01-01

    In m-learning environments, context-awareness is for wide use where learners' situations are varied, dynamic and unpredictable. We are facing the challenge of requirements of both generality and depth in generating and processing high-level context. In this paper, we present a social approach which exploits social dynamics and social computing for…

  3. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  4. The reliability of continuous brain responses during naturalistic listening to music.

    PubMed

    Burunat, Iballa; Toiviainen, Petri; Alluri, Vinoo; Bogert, Brigitte; Ristaniemi, Tapani; Sams, Mikko; Brattico, Elvira

    2016-01-01

    Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening of a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to more high-level features. The processing of such high-level features is probably more sensitive to the state and traits of the listeners, as well as of their background in music. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima; this is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from getting into regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
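
    The sketch below illustrates only the multi-level idea behind such a scheme: most particle evaluations use a cheap low-fidelity objective, and only the most promising particles are re-scored with an expensive high-fidelity one. The two objective functions are toy placeholders, not the FGA or full-waveform solvers, and the PSO parameters are generic defaults.

      # Schematic multi-level PSO: most particles are scored with a cheap low-
      # fidelity objective; only the current front-runners are re-scored with the
      # expensive high-fidelity objective.  Objectives here are toy placeholders.
      import numpy as np

      rng = np.random.default_rng(0)

      def low_fidelity(x):    # stand-in for a coarse (e.g. FGA-like) misfit
          return np.sum(x**2, axis=-1)

      def high_fidelity(x):   # stand-in for the expensive fine-physics misfit
          return np.sum(x**2, axis=-1) + 0.1 * np.sum(np.sin(5 * x), axis=-1)

      def multilevel_pso(dim=10, n_particles=40, iters=100, n_refine=5,
                         w=0.7, c1=1.5, c2=1.5):
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
          gbest, gbest_f = x[0].copy(), np.inf
          for _ in range(iters):
              f = low_fidelity(x)
              # re-evaluate the n_refine most promising particles at high fidelity
              top = np.argsort(f)[:n_refine]
              f[top] = high_fidelity(x[top])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              if pbest_f.min() < gbest_f:
                  gbest_f = pbest_f.min()
                  gbest = pbest[pbest_f.argmin()].copy()
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
          return gbest, gbest_f

      best_x, best_f = multilevel_pso()
      print("best misfit found:", best_f)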

  6. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  7. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  8. Information Systems at Enterprise. Design of Secure Network of Enterprise

    NASA Astrophysics Data System (ADS)

    Saigushev, N. Y.; Mikhailova, U. V.; Vedeneeva, O. A.; Tsaran, A. A.

    2018-05-01

    In today's information society, no enterprise can do without designing its own corporate network. Such a network accelerates and facilitates the work of employees at every level, but it also poses a significant threat to the company's confidential information. In addition to attackers intent on data theft, there are plenty of information threats posed by modern malware. In this regard, the security of corporate networks is an important component of modern computer security for any enterprise. This article describes the design of a protected corporate network that provides the computers on the network with access to the Internet as well as interoperability with a branch office. A high Internet access speed is achieved through the use of high-speed access channels and load balancing between devices. The security of the designed network is ensured through the use of VLAN technology as well as access lists and an AAA server.

  9. [Changes in the P300 amplitude under the influence of "aggressive" computer game in adolescents with various levels of initial aggression and conflicting behavior].

    PubMed

    Grigorian, V G; Stepanian, L S; Stepanian, A Iu; Agababian, A R

    2007-01-01

    Dynamic changes in the amplitude of component P300 of the evoked potentials in different cortical areas were studied as an index of activity of cortical structures responsible for actualization of a computer game with aggressive content with regard for the level of initial aggression and conflict in behavior of adolescent subjects. Dynamic changes in anxiety and aggression evoked by playing an "aggressive" computer game were shown to be dependent on the initial level of aggression and conflict. An increase in P300 in the frontal and orbitofrontal areas of both hemispheres was observed in adolescents with initially high level of aggression and conflict. In adolescents with initially low aggression and conflict, P300 decreased bilaterally in the frontal areas and did not change significantly in the orbitofrontal areas. These findings testify to the bilateral frontal top-down control over negative emotions.

  10. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
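
    A minimal sketch of the kind of spatial-frequency manipulation described (restricting an image to a band of frequencies with an FFT mask) is given below. The cutoff values and the random stand-in image are arbitrary; the study's actual stimuli and filter parameters are not reproduced.

      # Minimal sketch: restricting an image to a spatial-frequency band with an
      # FFT mask (cutoffs in cycles/image are arbitrary illustration values).
      import numpy as np

      def bandpass(image, low_cpi, high_cpi):
          """Keep only spatial frequencies with radius in [low_cpi, high_cpi)."""
          f = np.fft.fftshift(np.fft.fft2(image))
          h, w = image.shape
          yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
          radius = np.hypot(yy, xx)                 # frequency radius (cycles/image)
          mask = (radius >= low_cpi) & (radius < high_cpi)
          return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

      img = np.random.rand(256, 256)                # stand-in for a stimulus image
      low_sf  = bandpass(img, 0, 8)                 # coarse, low-frequency content
      high_sf = bandpass(img, 32, 128)              # fine, high-frequency content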

  11. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954

  12. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  13. Software on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Development Tools: a list of tools for build automation, version control, and high-level or specialized scripting. Toolchains: the available toolchains for building applications from source code.

  14. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    PubMed

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  15. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely-used NASA multi-block Computational Fluid Dynamics (CFD) packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of the perfectly load-balanced performance using data coalescing and the two levels of parallelism. SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms where the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this on-going study progresses.

  16. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
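
    The two estimators the protocol leans on, the Zwanzig free-energy perturbation formula and the linear response approximation (LRA), are standard and easy to state in code. The sketch below applies them to synthetic coarse-to-fine energy gaps; the numbers are placeholders, not QM/MM data from the paper.

      # Standard FEP and LRA estimators for the coarse->fine free-energy
      # perturbation the abstract relies on; the "energy gaps" here are synthetic
      # numbers, not real QM/MM data.
      import numpy as np

      kT = 0.596  # kcal/mol at ~300 K

      def fep(dE_on_coarse):
          """Zwanzig/FEP estimate from gaps sampled on the coarse potential."""
          return -kT * np.log(np.mean(np.exp(-np.asarray(dE_on_coarse) / kT)))

      def lra(dE_on_coarse, dE_on_fine):
          """Linear response approximation: average of the two endpoint means."""
          return 0.5 * (np.mean(dE_on_coarse) + np.mean(dE_on_fine))

      rng = np.random.default_rng(1)
      gaps_coarse = rng.normal(4.0, 1.0, 5000)   # E_fine - E_coarse on coarse frames
      gaps_fine   = rng.normal(2.5, 1.0, 5000)   # E_fine - E_coarse on fine frames
      print("FEP  dG(coarse->fine):", fep(gaps_coarse))
      print("LRA  dG(coarse->fine):", lra(gaps_coarse, gaps_fine))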

  17. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model

    PubMed Central

    2015-01-01

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force. PMID:25136268

  18. Training a cell-level classifier for detecting basal-cell carcinoma by combining human visual attention maps with low-level handcrafted features

    PubMed Central

    Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo

    2017-01-01

    Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
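
    The patch-level classification step maps naturally onto a standard library workflow. The sketch below is a generic scikit-learn version of that step with random stand-in feature vectors and labels (using the 176/98 train/test split mentioned in the abstract); it is not the authors' pipeline or data.

      # Generic sketch (not the authors' pipeline): train a patch-level SVM on
      # handcrafted low-level features and report accuracy and F-measure.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import accuracy_score, f1_score

      rng = np.random.default_rng(0)
      # Stand-ins for per-patch feature vectors (area/colour/nuclei statistics)
      # and cancer / non-cancer labels; 176 training and 98 test patches.
      X_train, y_train = rng.normal(size=(176, 12)), rng.integers(0, 2, 176)
      X_test,  y_test  = rng.normal(size=(98, 12)),  rng.integers(0, 2, 98)

      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
      pred = clf.predict(X_test)
      print("accuracy :", accuracy_score(y_test, pred))
      print("F-measure:", f1_score(y_test, pred))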

  19. A meta-analysis of outcomes from the use of computer-simulated experiments in science education

    NASA Astrophysics Data System (ADS)

    Lejeune, John Van

    The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement for students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences were found in retention, in student attitudes toward the subject, or in attitudes toward the educational method. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.

  20. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction-level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
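
    A thread-level miniature of the hybrid idea is sketched below: row blocks of one operand are multiplied against the other concurrently, relying on the fact that numpy's BLAS-backed matrix product releases the GIL. This is only an illustration of block partitioning, not the authors' MPI plus multi-threading implementation.

      # Thread-level sketch of a blocked matrix multiply: row blocks of A are
      # multiplied against B concurrently.
      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def blocked_matmul(A, B, n_blocks=4):
          blocks = np.array_split(A, n_blocks, axis=0)     # coarse-grain partition
          with ThreadPoolExecutor(max_workers=n_blocks) as pool:
              # numpy's BLAS dot releases the GIL, so the block products overlap
              results = list(pool.map(lambda blk: blk @ B, blocks))
          return np.vstack(results)

      A = np.random.rand(2048, 512)
      B = np.random.rand(512, 1024)
      assert np.allclose(blocked_matmul(A, B), A @ B)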

  1. Two-Level Weld-Material Homogenization for Efficient Computational Analysis of Welded Structure Blast-Survivability

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.

    2012-06-01

    The introduction of newer joining technologies like the so-called friction-stir welding (FSW) into automotive engineering requires knowledge of the joint-material microstructure and properties. Since the development of vehicles (including military vehicles capable of surviving blast and ballistic impacts) nowadays involves extensive use of computational engineering analyses (CEA), robust high-fidelity material models are needed for the FSW joints. A two-level material-homogenization procedure is proposed and utilized in this study to help manage computational cost and computer storage requirements for such CEAs. The method utilizes experimental (microstructure, microhardness, tensile testing, and x-ray diffraction) data to construct: (a) the material model for each weld zone and (b) the material model for the entire weld. The procedure is validated by comparing its predictions with the predictions of more detailed but more costly computational analyses.
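
    A toy version of the two-level idea is sketched below: zone-level properties (level 1) are combined into a single effective weld material (level 2) by a volume-fraction-weighted rule of mixtures. The zone names echo common FSW terminology, but every number is invented for illustration and the paper's actual homogenization scheme may differ.

      # Toy two-level homogenization: level 1 turns per-zone measurements into
      # zone-level properties, level 2 combines the zones (by volume fraction,
      # rule of mixtures) into a single weld material.  All numbers are invented.

      def homogenize(fractions, properties):
          """Volume-fraction-weighted (rule-of-mixtures) average of a property."""
          assert abs(sum(fractions) - 1.0) < 1e-9
          return sum(f * p for f, p in zip(fractions, properties))

      # Level 1: zone yield strengths (MPa) inferred from microhardness data.
      zones = {"nugget": 310.0, "TMAZ": 285.0, "HAZ": 240.0, "base": 345.0}

      # Level 2: fractions of the weld cross-section occupied by each zone.
      weld_fractions = {"nugget": 0.25, "TMAZ": 0.15, "HAZ": 0.20, "base": 0.40}

      weld_yield = homogenize([weld_fractions[z] for z in zones],
                              [zones[z] for z in zones])
      print(f"homogenized weld yield strength: {weld_yield:.1f} MPa")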

  2. Ergonomic assessment for the task of repairing computers in a manufacturing company: A case study.

    PubMed

    Maldonado-Macías, Aidé; Realyvásquez, Arturo; Hernández, Juan Luis; García-Alcaraz, Jorge

    2015-01-01

    Manufacturing industry workers who repair computers may be exposed to ergonomic risk factors. This project analyzes the tasks involved in the computer repair process to (1) find the risk level for musculoskeletal disorders (MSDs) and (2) propose ergonomic interventions to address any ergonomic issues. Work procedures and main body postures were video recorded and analyzed using task analysis, the Rapid Entire Body Assessment (REBA) postural method, and biomechanical analysis. A high risk for MSDs was found in every subtask using REBA. Although biomechanical analysis found an acceptable mass-center displacement during the tasks, a hazardous level of compression on the lower back during transportation of the computer was detected. This assessment found ergonomic risks mainly in the trunk, arm/forearm, and legs; the neck and hand/wrist were also compromised. Opportunities for ergonomic analyses and interventions in the design and execution of computer repair tasks are discussed.

  3. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. These dedicated 24/7 computing shift personnel contribute to detecting and reacting in a timely manner to any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  4. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel in complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
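
    The project's actual test suite is not reproduced here; the sketch below shows one generic form such an automated consistency check can take, a two-sample Kolmogorov-Smirnov comparison between an observable sampled from the model under test and from a reference model. Both samples are synthetic placeholders.

      # Sketch of an automated consistency check: compare a sample produced by a
      # vectorized model against a reference sample with a two-sample
      # Kolmogorov-Smirnov test.  The samples here are synthetic placeholders.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(42)
      reference_sample  = rng.exponential(scale=1.0, size=100_000)  # reference model
      vectorized_sample = rng.exponential(scale=1.0, size=100_000)  # model under test

      stat, p_value = ks_2samp(reference_sample, vectorized_sample)
      print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
      # A small p-value (e.g. < 0.01) would flag an inconsistency between the two.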

  5. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  6. Photoionization and High Density Gas

    NASA Technical Reports Server (NTRS)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.

  7. Impact of Educational Level on Study Attrition and Evaluation of Web-Based Computer-Tailored Interventions: Results From Seven Randomized Controlled Trials.

    PubMed

    Reinwand, Dominique A; Crutzen, Rik; Elfeddali, Iman; Schneider, Francine; Schulz, Daniela Nadine; Smit, Eline Suzanne; Stanczyk, Nicola Esther; Tange, Huibert; Voncken-Brewster, Viola; Walthouwer, Michel Jean Louis; Hoving, Ciska; de Vries, Hein

    2015-10-07

    Web-based computer-tailored interventions have shown to be effective in improving health behavior; however, high dropout attrition is a major issue in these interventions. The aim of this study is to assess whether people with a lower educational level drop out from studies more frequently compared to people with a higher educational level and to what extent this depends on evaluation of these interventions. Data from 7 randomized controlled trials of Web-based computer-tailored interventions were used to investigate dropout rates among participants with different educational levels. To be able to compare higher and lower educated participants, intervention evaluation was assessed by pooling data from these studies. Logistic regression analysis was used to assess whether intervention evaluation predicted dropout at follow-up measurements. In 3 studies, we found a higher study dropout attrition rate among participants with a lower educational level, whereas in 2 studies we found that middle educated participants had a higher dropout attrition rate compared to highly educated participants. In 4 studies, no such significant difference was found. Three of 7 studies showed that participants with a lower or middle educational level evaluated the interventions significantly better than highly educated participants ("Alcohol-Everything within the Limit": F2,376=5.97, P=.003; "My Healthy Behavior": F2,359=5.52, P=.004; "Master Your Breath": F2,317=3.17, P=.04). One study found lower intervention evaluation by lower educated participants compared to participants with a middle educational level ("Weight in Balance": F2,37=3.17, P=.05). Low evaluation of the interventions was not a significant predictor of dropout at a later follow-up measurement in any of the studies. Dropout attrition rates were higher among participants with a lower or middle educational level compared with highly educated participants. Although lower educated participants evaluated the interventions better in approximately half of the studies, evaluation did not predict dropout attrition. Further research is needed to find other explanations for high dropout rates among lower educated participants.
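
    The kind of model referred to (a logistic regression predicting dropout from educational level and intervention evaluation) is easy to sketch. The example below uses entirely synthetic data with made-up effect sizes, so the coefficients mean nothing about the seven trials; it only shows the structure of the analysis.

      # Synthetic sketch of a dropout logistic regression with education level and
      # intervention evaluation as predictors (x1 = education, x2 = evaluation).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      education = rng.integers(0, 3, n)            # 0 = low, 1 = middle, 2 = high
      evaluation = rng.normal(7.0, 1.5, n)         # intervention evaluation score
      # Made-up data-generating process: dropout decreases with education level.
      logit = -0.5 - 0.4 * education - 0.05 * evaluation
      dropout = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      X = sm.add_constant(np.column_stack([education, evaluation]))
      model = sm.Logit(dropout, X).fit(disp=0)
      print(model.summary())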

  8. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.

  9. Developing Instructional Applications at the Secondary Level. The Computer as a Tool.

    ERIC Educational Resources Information Center

    McManus, Jack; And Others

    Case studies are presented for seven Los Angeles area (California) high schools that worked with Pepperdine University in the IBM/ETS (International Business Machines/Educational Testing Service) Model Schools program, a project which provided training for selected secondary school teachers in the use of personal computers and selected software as…

  10. Cybersecurity Curriculum Development: Introducing Specialties in a Graduate Program

    ERIC Educational Resources Information Center

    Bicak, Ali; Liu, Michelle; Murphy, Diane

    2015-01-01

    The cybersecurity curriculum has grown dramatically over the past decade: once it was just a couple of courses in a computer science graduate program. Today cybersecurity is introduced at the high school level, incorporated into undergraduate computer science and information systems programs, and has resulted in a variety of cybersecurity-specific…

  11. Vocal Sight-Reading Assessment: Technological Advances, Student Perceptions, and Instructional Implications

    ERIC Educational Resources Information Center

    Henry, Michele

    2015-01-01

    This study investigated choral singers' comfort level using computer technology for vocal sight-reading assessment. High school choral singers (N = 138) attending a summer music camp completed a computer-based sight-reading assessment and accompanying pre- and posttest surveys on their musical backgrounds and perceptions about technology. A large…

  12. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
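
    The coarsen-then-refine initialization can be illustrated with any nonnegative factorization; the sketch below uses scikit-learn's generic NMF (which accepts custom initial factors) in place of the paper's MCR machinery, solving a channel-decimated copy of a synthetic data matrix first and reusing the factors as the starting point at full resolution. The data, component count, and decimation factor are all arbitrary.

      # Coarsen-then-refine warm start with a generic nonnegative factorization
      # standing in for the MCR model; all data below are synthetic.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      D = rng.random((200, 4)) @ rng.random((4, 2000))   # 200 spectra x 2000 channels
      k, decimate = 4, 8

      # Coarse level: factorize a channel-decimated copy (cheap).
      coarse = NMF(n_components=k, init="nndsvd", max_iter=500)
      C = coarse.fit_transform(D[:, ::decimate])          # concentration-like factor

      # Fine level: warm-start from the coarse solution; the spectral factor is
      # brought to full resolution by simple repetition (interpolation would also do).
      S_fine = np.repeat(coarse.components_, decimate, axis=1)[:, :D.shape[1]]
      fine = NMF(n_components=k, init="custom", max_iter=200)
      C_fine = fine.fit_transform(D, W=np.ascontiguousarray(C),
                                  H=np.ascontiguousarray(S_fine))
      residual = np.linalg.norm(D - C_fine @ fine.components_) / np.linalg.norm(D)
      print("relative residual:", residual)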

  13. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1994-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  14. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1996-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  15. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  16. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
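
    To show what a vector-based state update looks like in practice, the sketch below advances a whole population of leaky integrate-and-fire neurons with array operations instead of per-neuron loops. It is an illustration of the vectorized style, not code taken from Brian, and all network parameters are arbitrary.

      # Minimal vectorized leaky integrate-and-fire network update.
      import numpy as np

      N, dt, T = 1000, 1e-4, 0.5             # neurons, time step (s), duration (s)
      tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
      rng = np.random.default_rng(0)

      v = rng.random(N)                       # membrane potentials
      W = rng.random((N, N)) * (rng.random((N, N)) < 0.02) * 0.05   # sparse weights
      I_ext = 1.1 + 0.1 * rng.random(N)       # constant external drive
      spike_counts = np.zeros(N, dtype=int)

      for _ in range(int(T / dt)):
          # One vectorized state update for the whole population.
          v += dt / tau * (v_rest - v + I_ext)
          spiked = v >= v_thresh
          v[spiked] = v_reset
          v += W[:, spiked].sum(axis=1)       # propagate all spikes at once
          spike_counts += spiked

      print("mean firing rate:", spike_counts.mean() / T, "Hz")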

  17. Thermoelastic analysis of spent fuel and high level radioactive waste repositories in salt. A semi-analytical solution. [JUDITH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. John, C.M.

    1977-04-01

    An underground repository containing heat-generating High Level Waste or Spent Unreprocessed Fuel may be approximated as a finite number of heat sources distributed across the plane of the repository. The resulting temperature, displacement and stress changes may be calculated using analytical solutions, provided linear thermoelasticity is assumed. This report documents a computer program based on this approach and gives results that form the basis for a comparison between the effects of disposing of High Level Waste and Spent Unreprocessed Fuel.
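
    The superposition idea behind such semi-analytical codes can be sketched with the textbook constant-strength point-source conduction solution, summed over a grid of canister locations. The sketch below ignores the decay of the heat output, the free surface, and the thermoelastic part of the calculation, and all material and layout numbers are illustrative rather than taken from the report.

      # Superposition of constant-strength point heat sources in an infinite
      # medium, T = q/(4*pi*K*r) * erfc(r / (2*sqrt(alpha*t))).  All numbers are
      # illustrative; heat decay and the free surface are ignored.
      import numpy as np
      from scipy.special import erfc

      K, alpha = 5.0, 3e-6          # conductivity (W/m/K), diffusivity (m^2/s)
      q = 1000.0                    # power per canister (W), held constant here
      spacing, n = 20.0, 10         # canister spacing (m), n x n array

      xs = (np.arange(n) - (n - 1) / 2) * spacing
      sources = np.array([(x, y, 0.0) for x in xs for y in xs])   # repository plane

      def temperature_rise(point, t):
          r = np.linalg.norm(sources - np.asarray(point), axis=1)
          return np.sum(q / (4 * np.pi * K * r) * erfc(r / (2 * np.sqrt(alpha * t))))

      ten_years = 10 * 3.15e7       # seconds
      print("dT 5 m above repository centre:",
            temperature_rise((0.0, 0.0, 5.0), ten_years), "K")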

  18. Customized binary and multi-level HfO2-x-based memristors tuned by oxidation conditions.

    PubMed

    He, Weifan; Sun, Huajun; Zhou, Yaxiong; Lu, Ke; Xue, Kanhao; Miao, Xiangshui

    2017-08-30

    The memristor is a promising candidate for the next generation of non-volatile memory, especially when based on HfO2-x, given its compatibility with advanced CMOS technologies. Although various resistive transitions have been reported independently, customized binary and multi-level memristors in a unified HfO2-x material have not been studied. Here we report Pt/HfO2-x/Ti memristors with double memristive modes, forming-free behavior and low operation voltages, which were tuned by the oxidation conditions of the HfO2-x films. As the O/Hf ratios of the HfO2-x films increase, the forming voltages, SET voltages, and Roff/Ron windows increase regularly, while their resistive transitions change from gradual to sharp in the I/V sweep. Two memristors with typical resistive transitions were studied to customize binary and multi-level memristive modes, respectively. For the binary mode, high-speed switching with 10^3 pulses (10 ns) and a retention test at 85 °C (>10^4 s) were achieved. For the multi-level mode, 12 stable resistance states were confirmed by ongoing multi-window switching (ranging from 10 ns to 1 μs and completing 10 cycles of each pulse). Our customized binary and multi-level HfO2-x-based memristors show high-speed switching, multi-level storage and excellent stability, and can be applied separately to logic computing and neuromorphic computing, making them further suitable for in-memory computing chips when the deposition atmosphere is fine-tuned.

  19. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
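
    The Newton part of the scheme is the classical Newton-Schulz iteration X_{k+1} = X_k(2I - A X_k), in which the two matrix products per step are the operations the paper accelerates with Strassen multiplication. The sketch below uses plain numpy products in their place and a standard safe starting guess; it is a compact illustration, not the paper's Cray-2 implementation.

      # Newton (Newton-Schulz) iteration for a matrix inverse; the two matrix
      # products per step are where Strassen multiplication would be used.
      import numpy as np

      def newton_inverse(A, iters=30):
          n = A.shape[0]
          I = np.eye(n)
          # Classical safe starting guess: X0 = A.T / (||A||_1 * ||A||_inf).
          X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
          for _ in range(iters):
              X = X @ (2 * I - A @ X)
          return X

      A = np.random.rand(200, 200) + 200 * np.eye(200)   # well-conditioned test matrix
      X = newton_inverse(A)
      print("max |A X - I| :", np.abs(A @ X - np.eye(200)).max())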

  20. GPU-accelerated FDTD modeling of radio-frequency field-tissue interactions in high-field MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Weber, Ewald; Li, Yu; Crozier, Stuart

    2011-06-01

    The analysis of high-field RF field-tissue interactions requires high-performance finite-difference time-domain (FDTD) computing. Conventional CPU-based FDTD calculations offer limited computing performance in a PC environment. This study presents a graphics processing unit (GPU)-based parallel-computing framework, producing substantially boosted computing efficiency (with a two-order-of-magnitude speedup factor) at a PC-level cost. Specific details of implementing the FDTD method on a GPU architecture have been presented and the new computational strategy has been successfully applied to the design of a novel 8-element transceive RF coil system at 9.4 T. Facilitated by the powerful GPU-FDTD computing, the new RF coil array offers optimized fields (averaging 25% improvement in sensitivity, and 20% reduction in loop coupling compared with conventional array structures of the same size) for small animal imaging with a robust RF configuration. The GPU-enabled acceleration paves the way for FDTD to be applied for both detailed forward modeling and inverse design of MRI coils, which were previously impractical.
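
    The stencil that such GPU implementations parallelize is easy to show in one dimension. The sketch below advances a 1-D Yee-grid FDTD simulation with vectorized numpy updates and a soft Gaussian source; grid size, resolution, and source parameters are arbitrary, and the paper's 3-D GPU kernels are not reproduced.

      # Minimal 1-D Yee-grid FDTD update illustrating the stencil that GPU
      # implementations parallelize.  All grid and source parameters are arbitrary.
      import numpy as np

      c0 = 3e8
      nz, n_steps = 400, 600
      dz = 1e-3
      dt = 0.99 * dz / c0                      # Courant-stable time step

      Ex = np.zeros(nz)
      Hy = np.zeros(nz - 1)

      for n in range(n_steps):
          # advance Hy from the spatial difference of Ex (vectorized over the grid)
          Hy += dt / (4e-7 * np.pi * dz) * (Ex[1:] - Ex[:-1])
          # advance Ex from the spatial difference of Hy
          Ex[1:-1] += dt / (8.854e-12 * dz) * (Hy[1:] - Hy[:-1])
          # soft Gaussian source in the middle of the grid
          Ex[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

      print("peak |Ex| after", n_steps, "steps:", np.abs(Ex).max())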

  1. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  2. Photodissociation of CS from Excited Rovibrational Levels

    NASA Astrophysics Data System (ADS)

    Pattillo, R. J.; Cieszewski, R.; Stancil, P. C.; Forrey, R. C.; Babb, J. F.; McCann, J. F.; McLaughlin, B. M.

    2018-05-01

    Accurate photodissociation cross sections have been computed for transitions from the X 1Σ+ ground electronic state of CS to six low-lying excited electronic states. New ab initio potential curves and transition dipole moment functions have been obtained for these computations using the multi-reference configuration interaction approach with the Davidson correction (MRCI+Q) and aug-cc-pV6Z basis sets. State-resolved cross sections have been computed for transitions from nearly the full range of rovibrational levels of the X 1Σ+ state and for photon wavelengths ranging from 500 Å to threshold. Destruction of CS via predissociation in highly excited electronic states originating from the rovibrational ground state is found to be unimportant. Photodissociation cross sections are presented for temperatures in the range between 1000 and 10,000 K, where a Boltzmann distribution of initial rovibrational levels is assumed. Applications of the current computations to various astrophysical environments are briefly discussed focusing on photodissociation rates due to the standard interstellar and blackbody radiation fields.
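
    The temperature-dependent cross sections quoted above follow from weighting the state-resolved values by a Boltzmann population of the initial rovibrational levels. A minimal sketch of that averaging step is given below; the array layout and the use of cm^-1 energy units are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    K_B_CM = 0.6950348   # Boltzmann constant in cm^-1 per K

    def lte_cross_section(sigma_vj, E_vj, J_vj, T):
        """Boltzmann-average state-resolved cross sections over initial levels.

        sigma_vj : array (n_levels, n_wavelengths) of state-resolved cross sections
        E_vj     : level energies in cm^-1 above the rovibrational ground state
        J_vj     : rotational quantum numbers (for the 2J+1 degeneracy)
        T        : gas temperature in K
        """
        w = (2 * J_vj + 1) * np.exp(-E_vj / (K_B_CM * T))
        w /= w.sum()                      # partition-function normalisation
        return w @ sigma_vj               # weighted sum over initial levels
    ```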

  3. High level cognitive information processing in neural networks

    NASA Technical Reports Server (NTRS)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  4. Recent advances in QM/MM free energy calculations using reference potentials.

    PubMed

    Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L

    2015-05-01

    Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
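
    The core of the reference-potential idea is to sample on a cheap low-level potential and then correct the free energy to the high-level QM/MM surface by exponential averaging (Zwanzig's one-step perturbation). The snippet below is a minimal, hedged sketch of that correction; the units, names, and single-step form are assumptions, and the schemes discussed in the review add mean-field and multi-step refinements.

    ```python
    import numpy as np

    def reference_potential_correction(E_high, E_low, T=300.0):
        """One-sided free energy perturbation from a cheap reference potential
        to a high-level QM/MM potential (Zwanzig's formula).

        E_high, E_low : energies (kJ/mol) of the same configurations, sampled
                        on the low-level reference potential.
        """
        kT = 0.0083145 * T                     # kB in kJ/mol/K times T
        dE = np.asarray(E_high) - np.asarray(E_low)
        # Subtract the minimum before exponentiating for numerical stability
        shift = dE.min()
        return shift - kT * np.log(np.mean(np.exp(-(dE - shift) / kT)))
    ```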

  5. Synthesizing Results From Empirical Research on Computer-Based Scaffolding in STEM Education

    PubMed Central

    Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju; Lefler, Mason

    2016-01-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has synthesized the results of these studies. This review addresses that need by synthesizing the results of 144 experimental studies (333 outcomes) on the effects of computer-based scaffolding designed to assist the full range of STEM learners (primary through adult education) as they navigated ill-structured, problem-centered curricula. Results of our random effect meta-analysis (a) indicate that computer-based scaffolding showed a consistently positive (ḡ = 0.46) effect on cognitive outcomes across various contexts of use, scaffolding characteristics, and levels of assessment and (b) shed light on many scaffolding debates, including the roles of customization (i.e., fading and adding) and context-specific support. Specifically, scaffolding’s influence on cognitive outcomes did not vary on the basis of context-specificity, presence or absence of scaffolding change, and logic by which scaffolding change is implemented. Scaffolding’s influence was greatest when measured at the principles level and among adult learners. Still scaffolding’s effect was substantial and significantly greater than zero across all age groups and assessment levels. These results suggest that scaffolding is a highly effective intervention across levels of different characteristics and can largely be designed in many different ways while still being highly effective. PMID:28344365

  6. Synthesizing Results From Empirical Research on Computer-Based Scaffolding in STEM Education: A Meta-Analysis.

    PubMed

    Belland, Brian R; Walker, Andrew E; Kim, Nam Ju; Lefler, Mason

    2017-04-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has synthesized the results of these studies. This review addresses that need by synthesizing the results of 144 experimental studies (333 outcomes) on the effects of computer-based scaffolding designed to assist the full range of STEM learners (primary through adult education) as they navigated ill-structured, problem-centered curricula. Results of our random effect meta-analysis (a) indicate that computer-based scaffolding showed a consistently positive (ḡ = 0.46) effect on cognitive outcomes across various contexts of use, scaffolding characteristics, and levels of assessment and (b) shed light on many scaffolding debates, including the roles of customization (i.e., fading and adding) and context-specific support. Specifically, scaffolding's influence on cognitive outcomes did not vary on the basis of context-specificity, presence or absence of scaffolding change, and logic by which scaffolding change is implemented. Scaffolding's influence was greatest when measured at the principles level and among adult learners. Still scaffolding's effect was substantial and significantly greater than zero across all age groups and assessment levels. These results suggest that scaffolding is a highly effective intervention across levels of different characteristics and can largely be designed in many different ways while still being highly effective.
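
    As a concrete illustration of the random-effects machinery behind a pooled effect of ḡ = 0.46, the sketch below implements the standard DerSimonian-Laird estimator for a set of per-outcome effect sizes and variances. It is a generic textbook formulation, not the authors' analysis scripts.

    ```python
    import numpy as np

    def random_effects_pooled_g(g, v):
        """DerSimonian-Laird random-effects pooling of Hedges' g effect sizes.

        g : per-outcome effect sizes; v : their sampling variances.
        Returns the pooled effect and its standard error.
        """
        g, v = np.asarray(g, float), np.asarray(v, float)
        w = 1.0 / v                                   # fixed-effect weights
        g_fixed = np.sum(w * g) / np.sum(w)
        Q = np.sum(w * (g - g_fixed) ** 2)            # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(g) - 1)) / c)       # between-study variance
        w_star = 1.0 / (v + tau2)                     # random-effects weights
        g_bar = np.sum(w_star * g) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return g_bar, se
    ```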

  7. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  8. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Moreover, additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.

  9. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE PAGES

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    2014-02-12

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Moreover, additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.
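
    To make the additive idea concrete: in an additive cycle the smoother and the coarse-grid correction are both applied to the same residual and summed, so the two can proceed concurrently instead of sequentially as in a multiplicative cycle. The dense two-level sketch below illustrates only that structure; the improved variants introduced in the paper are not reproduced here, and the damping factor is an assumed example value.

    ```python
    import numpy as np

    def additive_two_level(A, P, r, omega=0.5):
        """One application of a simple additive two-level preconditioner.

        A : fine-level matrix, P : interpolation operator, r : current residual.
        The smoothing and coarse-correction terms use the same residual and
        are summed, which is what allows them to overlap in communication.
        """
        D_inv = 1.0 / A.diagonal()
        smoothed = omega * D_inv * r                  # damped-Jacobi smoothing term
        A_c = P.T @ A @ P                             # Galerkin coarse operator
        coarse = P @ np.linalg.solve(A_c, P.T @ r)    # coarse-grid correction term
        return smoothed + coarse                      # additive (summed) combination
    ```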

  10. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.

  11. BESIII Physical Analysis on Hadoop Platform

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing

    2014-06-01

    In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. The jobs running on a traditional cluster with a Data-to-Computing structure have to read large volumes of data via the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit us in dealing with Big Data. This paper brings the idea of using the MapReduce model to BESIII physical analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis, but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
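
    A TAG-based pre-selection fits the MapReduce pattern naturally: the map step applies cheap event-level cuts, and the reduce step aggregates the surviving events. The Hadoop-Streaming-style pair below is only an illustration; the field layout and cut values are invented for the example and are not BESIII's actual TAG schema.

    ```python
    # Run with: hadoop streaming, or locally as `script.py map` / `script.py reduce`
    import sys

    def mapper():
        for line in sys.stdin:
            run, event, n_charged, total_energy = line.split()[:4]
            # Keep only events passing a cheap TAG-level cut; emit run as the key
            if int(n_charged) >= 2 and float(total_energy) > 3.0:
                print(f"{run}\t{event}")

    def reducer():
        counts = {}
        for line in sys.stdin:
            run, _ = line.split("\t")
            counts[run] = counts.get(run, 0) + 1   # selected events per run
        for run, n in sorted(counts.items()):
            print(f"{run}\t{n}")

    if __name__ == "__main__":
        (mapper if sys.argv[1] == "map" else reducer)()
    ```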

  12. Coupling of a continuum ice sheet model and a discrete element calving model using a scientific workflow system

    NASA Astrophysics Data System (ADS)

    Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut

    2017-04-01

    Scientific communities generate complex simulations through orchestration of semi-structured analysis pipelines which involve execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transition through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, becomes more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. As more scripts and modifications were introduced to meet user requirements, debugging and validation of results became increasingly cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of a high-level scientific workflow middleware makes reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements: This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
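
    Stripped of the middleware, the coupled workflow amounts to an iterative loop that alternates the continuum ice-flow solve, a format conversion, the discrete-element calving run, and a conversion of the new calving front back into the ice-flow geometry. The plain-Python skeleton below sketches only that control flow; the executable names, input files, and cycle count are placeholders, and the actual workflow is expressed in UNICORE rather than a script.

    ```python
    import subprocess

    def convert(src, dst):
        """Placeholder for the mesh/geometry format-conversion steps."""
        subprocess.run(["python", "convert_format.py", src, dst], check=True)

    for cycle in range(10):                                            # iterative coupling loop
        subprocess.run(["ElmerSolver", "ice_flow.sif"], check=True)    # continuum ice flow step
        convert("elmer_output.vtu", "hidem_input.dat")                 # hand geometry to HiDEM
        subprocess.run(["HiDEM", "hidem_input.dat"], check=True)       # discrete calving step
        convert("hidem_output.dat", "elmer_geometry.msh")              # feed new front back
    ```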

  13. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.

  14. A high level language for a high performance computer

    NASA Technical Reports Server (NTRS)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  15. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
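
    As a quick sanity check on the quoted figures, the speedup and implied parallel efficiency can be worked out directly (a back-of-the-envelope sketch, not output from Trace):

    ```python
    # 32 compute nodes x 12 cores per node = 384 cores in the reported run
    cores = 32 * 12
    speedup = 158.0                      # reported speedup over one core
    print(f"parallel efficiency ~ {speedup / cores:.0%}")          # roughly 41%

    # End-to-end time per iteration dropped from 12.5 h to under 5 min
    print(f"wall-clock reduction ~ {12.5 * 60 / 5:.0f}x or more")  # at least 150x
    ```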

  16. MATH77, Version 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, Charles L.; Krogh, Fred; Van Snyder, W.; Oken, Carol A.; Mccreary, Faith A.; Lieske, Jay H.; Perrine, Jack; Coffin, Ralph S.; Wayne, Warren J.

    1994-01-01

    MATH77 is a high-quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for basic computational processes of science and engineering. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. The MATH77 release 4.0 subroutine library is designed to be usable on any computer system supporting the full ANSI-standard FORTRAN 77 language.

  17. Reducing Abstraction in High School Computer Science Education: The Case of Definition, Implementation, and Use of Abstract Data Types

    ERIC Educational Resources Information Center

    Sakhnini, Victoria; Hazzan, Orit

    2008-01-01

    The research presented in this article deals with the difficulties and mental processes involved in the definition, implementation, and use of abstract data types encountered by 12th grade advanced-level computer science students. Research findings are interpreted within the theoretical framework of "reducing abstraction" [Hazzan 1999]. The…

  18. Structural and Thermodynamic Properties of the Argon Dimer: A Computational Chemistry Exercise in Quantum and Statistical Mechanics

    ERIC Educational Resources Information Center

    Halpern, Arthur M.

    2010-01-01

    Using readily available computational applications and resources, students can construct a high-level ab initio potential energy surface (PES) for the argon dimer. From this information, they can obtain detailed molecular constants of the dimer, including its dissociation energy, which compare well with experimental determinations. Using both…

  19. A Methodological Study of a Computer-Managed Instructional Program in High School Physics.

    ERIC Educational Resources Information Center

    Denton, Jon James

    The purpose of this study was to develop and evaluate an instructional model which utilized the computer to produce individually prescribed instructional guides in physics at the secondary school level. The sample consisted of three classes. Of these, two were randomly selected to serve as the treatment groups, e.g., individualized instruction and…

  20. Studying the Effects of Nuclear Weapons Using a Slide-Rule Computer

    ERIC Educational Resources Information Center

    Shastri, Ananda

    2007-01-01

    This paper describes the construction of a slide-rule computer that allows one to quickly determine magnitudes of several effects that result from the detonation of a nuclear device. Suggestions for exercises are also included that allow high school and college-level physics students to explore scenarios involving these effects. It is hoped that…

  1. Effects of Computer Skill on Mouse Move and Click Performance

    ERIC Educational Resources Information Center

    Panagiotakopoulos, Chris; Sarris, Menelaos

    2008-01-01

    This study focuses on the use of computers in the field of education. It reports a series of experimental mouse move and click tasks on constant and moving stimuli. These experiments attempt to explore the efficiency with which individuals of different skill level and age group perform using a mouse. Differences in performance between high-skill…

  2. Molecular-Level Study of the Effect of Prior Axial Compression/Torsion on the Axial-Tensile Strength of PPTA Fibers

    DTIC Science & Technology

    2013-07-16

    Twaron, etc., which are characterized by high specific strength and high specific stiffness. Fibers of this type are often referred to as "ballistic ... high level of penetration resistance against large kinetic energy projectiles, such as bullets, detonated-mine-induced soil ejecta, improvised ... increasingly being designed and developed through an extensive use of computer-aided engineering (CAE) methods and tools. The utility of these

  3. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
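
    A standard way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve cheaply in single precision, then measure and correct the residual in double precision. The NumPy sketch below illustrates that pattern as a generic example; it is not the authors' energy-aware solver, and a production code would reuse a single low-precision factorization rather than re-solving each step.

    ```python
    import numpy as np

    def mixed_precision_solve(A, b, iters=5):
        """Iterative refinement: low-precision solves, high-precision residuals."""
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                                   # high-precision residual
            d = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision correction
            x += d.astype(np.float64)
        return x

    A = np.random.rand(500, 500) + 500 * np.eye(500)   # well-conditioned test system
    b = np.random.rand(500)
    print(np.linalg.norm(A @ mixed_precision_solve(A, b) - b))
    ```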

  4. High School Forum

    ERIC Educational Resources Information Center

    Herron, J. Dudley, Ed.

    1977-01-01

    Discusses uses of programmable pocket electronic calculators in the secondary-level classroom. Uses discussed include grading, laboratory exercises, computing T-scores, and a quantitative approach to chemical equilibrium. (SL)

  5. Configurable software for satellite graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartzman, P D

    An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during real-time operation. The design of the system was done at a very high level using a set-theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.

  6. Towards Anatomic Scale Agent-Based Modeling with a Massively Parallel Spatially Explicit General-Purpose Model of Enteric Tissue (SEGMEnT_HPC)

    PubMed Central

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784

  7. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  8. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we will describe how data flow can be used to model computer systems.

  9. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros, James H.; Grant, Ryan; Levenhagen, Michael J.

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  11. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.

  12. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  13. Computer control of a microgravity mammalian cell bioreactor

    NASA Technical Reports Server (NTRS)

    Hall, William A.

    1987-01-01

    The initial steps taken in developing a completely menu-driven and totally automated computer control system for a bioreactor are discussed. This bioreactor is an electro-mechanical cell growth system requiring vigorous control of slowly changing parameters, many of which are so dynamically interactive that computer control is a necessity. The process computer will have two main functions. First, it will provide continuous environmental control utilizing low-signal-level transducers as inputs and high-powered control devices such as solenoids and motors as outputs. Secondly, it will provide continuous environmental monitoring, including mass data storage and periodic data dumps to a supervisory computer.

  14. Analog synthetic biology.

    PubMed

    Sarpeshkar, R

    2014-03-28

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.

  15. Analog synthetic biology

    PubMed Central

    Sarpeshkar, R.

    2014-01-01

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA–protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations. PMID:24567476

  16. A High School Level Course On Robot Design And Construction

    NASA Astrophysics Data System (ADS)

    Sadler, Paul M.; Crandall, Jack L.

    1984-02-01

    The Robotics Design and Construction Class at Sehome High School was developed to offer gifted and/or highly motivated students an in-depth introduction to a modern engineering topic. The course includes instruction in basic electronics, digital and radio electronics, construction skills, robotics literacy, construction of the HERO 1 Heathkit Robot, computer/ robot programming, and voice synthesis. A key element which leads to the success of the course is the involvement of various community assets including manpower and financial assistance. The instructors included a physics/electronics teacher, a computer science teacher, two retired engineers, and an electronics technician.

  17. Comparative phyloinformatics of virus genes at micro and macro levels in a distributed computing environment.

    PubMed

    Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo

    2008-01-01

    Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved leads to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host range based clustering.

  18. Pathology informatics questions and answers from the University of Pittsburgh pathology residency informatics rotation.

    PubMed

    Harrison, James H

    2004-01-01

    Effective pathology practice increasingly requires familiarity with concepts in medical informatics that may cover a broad range of topics, for example, traditional clinical information systems, desktop and Internet computer applications, and effective protocols for computer security. To address this need, the University of Pittsburgh (Pittsburgh, Pa) includes a full-time, 3-week rotation in pathology informatics as a required component of pathology residency training. To teach pathology residents general informatics concepts important in pathology practice. We assess the efficacy of the rotation in communicating these concepts using a short-answer examination administered at the end of the rotation. Because the increasing use of computers and the Internet in education and general communications prior to residency training has the potential to communicate key concepts that might not need additional coverage in the rotation, we have also evaluated incoming residents' informatics knowledge using a similar pretest. This article lists 128 questions that cover a range of topics in pathology informatics at a level appropriate for residency training. These questions were used for pretests and posttests in the pathology informatics rotation in the Pathology Residency Program at the University of Pittsburgh for the years 2000 through 2002. With slight modification, the questions are organized here into 15 topic categories within pathology informatics. The answers provided are brief and are meant to orient the reader to the question and suggest the level of detail appropriate in an answer from a pathology resident. A previously published evaluation of the test results revealed that pretest scores did not increase during the 3-year evaluation period, and self-assessed computer skill level correlated with pretest scores, but all pretest scores were low. Posttest scores increased substantially, and posttest scores did not correlate with the self-assessed computer skill level recorded at pretest time. Even residents who rated themselves high in computer skills lacked many concepts important in pathology informatics, and posttest scores showed that residents with both high and low self-assessed skill levels learned pathology informatics concepts effectively.

  19. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  20. Diagnostic reference levels of paediatric computed tomography examinations performed at a dedicated Australian paediatric hospital.

    PubMed

    Bibbo, Giovanni; Brown, Scott; Linke, Rebecca

    2016-08-01

    Diagnostic Reference Levels (DRL) of procedures involving ionizing radiation are important tools for optimizing radiation doses delivered to patients and for identifying cases where the levels of doses are unusually high. This is particularly important for paediatric patients undergoing computed tomography (CT) examinations as these examinations are associated with relatively high doses. Paediatric CT studies, performed at our institution from January 2010 to March 2014, have been retrospectively analysed to determine the 75th and 95th percentiles of both the volume computed tomography dose index (CTDIvol) and dose-length product (DLP) for the most commonly performed studies to: establish local diagnostic reference levels for paediatric computed tomography examinations performed at our institution, benchmark our DRL with national and international published paediatric values, and determine the compliance of CT radiographers with established protocols. The derived local 75th percentile DRL have been found to be acceptable when compared with those published by the Australian National Radiation Dose Register and two national children's hospitals, and at the international level with the National Reference Doses for the UK. The 95th percentiles of CTDIvol for the various CT examinations have been found to be acceptable values for the CT scanner Dose-Check Notification. Benchmarking CT radiographers shows that they follow the set protocols for the various examinations without significant variations in the machine setting factors. The derivation of DRL has given us the tool to evaluate and improve the performance of our CT service by improved compliance and a reduction in radiation dose to our paediatric patients. We have also been able to benchmark our performance with similar national and international institutions. © 2016 The Royal Australian and New Zealand College of Radiologists.
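
    The statistical step described here is simply taking the 75th percentile (local DRL) and 95th percentile (dose-check notification level) of the per-study dose metrics for each examination type. A minimal sketch, with assumed variable names rather than the study's actual data structures:

    ```python
    import numpy as np

    def local_drl(ctdi_vol, dlp):
        """75th/95th percentiles of CTDIvol and DLP for one examination type.

        ctdi_vol, dlp : arrays of per-study dose values for that examination.
        """
        return {
            "CTDIvol_75": np.percentile(ctdi_vol, 75),   # local DRL
            "CTDIvol_95": np.percentile(ctdi_vol, 95),   # dose-check notification level
            "DLP_75": np.percentile(dlp, 75),
            "DLP_95": np.percentile(dlp, 95),
        }
    ```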

  1. Legal issues in clouds: towards a risk inventory.

    PubMed

    Djemame, Karim; Barnitzke, Benno; Corrales, Marcelo; Kiran, Mariam; Jiang, Ming; Armstrong, Django; Forgó, Nikolaus; Nwankwo, Iheanyi

    2013-01-28

    Cloud computing technologies have reached a high level of development, yet a number of obstacles still exist that must be overcome before widespread commercial adoption can become a reality. In a cloud environment, end users requesting services and cloud providers negotiate service-level agreements (SLAs) that provide explicit statements of all expectations and obligations of the participants. If cloud computing is to experience widespread commercial adoption, then incorporating risk assessment techniques is essential during SLA negotiation and service operation. This article focuses on the legal issues surrounding risk assessment in cloud computing. Specifically, it analyses risk regarding data protection and security, and presents the requirements of an inherent risk inventory. The usefulness of such a risk inventory is described in the context of the OPTIMIS project.

  2. Special Report: Computational Science — Behind Innovation and Discovery: More, faster, better, moving computational sciences forward—an interview with PNNL's George Michaels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teske, Lisa J.; Michaels, George S.

    2005-10-25

    The fall issue of Breakthroughs will have a special section on CISD. This article falls in that section as the introduction piece for the directorate. I conducted an in-depth interview with George and came away with a lot of notes. Knowing that other articles in the special section are covering the specifics of the various initiatives within the directorate, this is a high-level view from George's perspective. The idea is to help readers (government supporters and funders and potential industry clients) understand the capability and level of service the lab can offer by having a research directorate focused on computational and informational sciences.

  3. Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.

    PubMed

    Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A

    1997-01-01

    SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to develop and apply computer methods to enhance the functional information attached to TREMBL entries.

  4. Real-time interactive 3D computer stereography for recreational applications

    NASA Astrophysics Data System (ADS)

    Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi

    2008-02-01

    With the increasing calculation costs of 3D computer stereography, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of humans' 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.

  5. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  6. Computer Programming Languages for Health Care

    PubMed Central

    O'Neill, Joseph T.

    1979-01-01

    This paper advocates the use of standard high level programming languages for medical computing. It recommends that U.S. Government agencies having health care missions implement coordinated policies that encourage the use of existing standard languages and the development of new ones, thereby enabling them and the medical computing community at large to share state-of-the-art application programs. Examples are based on a model that characterizes language and language translator influence upon the specification, development, test, evaluation, and transfer of application programs.

  7. Computational adaptive optics for broadband optical interferometric tomography of biological tissue

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.

    2015-03-01

    High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth-of-field extended by more than an order of magnitude, in real time. Subsequently, aberration correction with CAO can be performed on a tomogram, rather than on the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
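
    The virtual-pupil idea can be sketched in a few lines: take the 2-D spectrum of a reconstructed complex en face plane and multiply it by the conjugate of an assumed Zernike aberration phase. The array contents, defocus coefficient and pupil normalization below are illustrative assumptions, not the implementation described in the review.

```python
# Minimal sketch of computational aberration correction on one complex
# en face OCT plane: multiply its spatial-frequency spectrum (a "virtual
# pupil") by the conjugate of an assumed Zernike defocus phase.
import numpy as np

# Placeholder complex tomogram plane (would come from an ISAM/OCT reconstruction)
field = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)

ky, kx = np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256), indexing="ij")
rho = np.sqrt(kx**2 + ky**2) / np.abs(kx).max()      # approximate normalized pupil radius
defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)        # Zernike Z_2^0 (defocus)

a_defocus = 1.5                                      # assumed aberration coefficient (radians)
pupil_phase = a_defocus * defocus

spectrum = np.fft.fft2(field)
corrected = np.fft.ifft2(spectrum * np.exp(-1j * pupil_phase))  # phase-conjugate correction
```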

  8. Deficiencies in Basic Knowledge and Skills among High School Business Education Seniors.

    ERIC Educational Resources Information Center

    Goddard, M. Lee

    1982-01-01

    Conducted a study to determine the level of basic skills achievement among Ohio high school business education seniors. Found that these students lacked competency in general knowledge and in computational skills, basic English skills, and typewriting skills. (GC)

  9. Digital optical conversion module

    DOEpatents

    Kotter, D.K.; Rankin, R.A.

    1988-07-19

    A digital optical conversion module used to convert an analog signal to a computer compatible digital signal including a voltage-to-frequency converter, frequency offset response circuitry, and an electrical-to-optical converter. Also used in conjunction with the digital optical conversion module is an optical link and an interface at the computer for converting the optical signal back to an electrical signal. Suitable for use in hostile environments having high levels of electromagnetic interference, the conversion module retains high resolution of the analog signal while eliminating the potential for errors due to noise and interference. The module can be used to link analog output scientific equipment such as an electrometer used with a mass spectrometer to a computer. 2 figs.

  10. Digital optical conversion module

    DOEpatents

    Kotter, Dale K.; Rankin, Richard A.

    1991-02-26

    A digital optical conversion module used to convert an analog signal to a computer compatible digital signal including a voltage-to-frequency converter, frequency offset response circuitry, and an electrical-to-optical converter. Also used in conjunction with the digital optical conversion module is an optical link and an interface at the computer for converting the optical signal back to an electrical signal. Suitable for use in hostile environments having high levels of electromagnetic interference, the conversion module retains high resolution of the analog signal while eliminating the potential for errors due to noise and interference. The module can be used to link analog output scientific equipment such as an electrometer used with a mass spectrometer to a computer.

  11. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
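
    The binary high/low-permeability zonation lends itself to a quadratic unconstrained binary optimization (QUBO), the problem form a quantum annealer accepts. The toy sketch below builds a made-up QUBO matrix and solves it by brute force as a stand-in for the annealer; no D-Wave API is used and the couplings are not derived from the site model.

```python
# Toy sketch: pose a binary high/low-permeability zonation as a QUBO,
# E(q) = q^T Q q, and solve it by brute force (stand-in for a quantum annealer).
import itertools
import numpy as np

n_cells = 6                                   # tiny aquifer discretization (illustrative)
rng = np.random.default_rng(0)
Q = rng.normal(size=(n_cells, n_cells))       # made-up misfit/regularization couplings
Q = (Q + Q.T) / 2.0

best_energy, best_q = np.inf, None
for bits in itertools.product([0, 1], repeat=n_cells):
    q = np.array(bits)
    energy = q @ Q @ q
    if energy < best_energy:
        best_energy, best_q = energy, q

print("high(1)/low(0) permeability pattern:", best_q, "energy:", best_energy)
```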

  12. Astronomy and Computing: A new journal for the astronomical computing community

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Budavári, Tamás; Fluke, Christopher; Gray, Norman; Mann, Robert G.; O'Mullane, William; Wicenec, Andreas; Wise, Michael

    2013-02-01

    We introduce Astronomy and Computing, a new journal for the growing population of people working in the domain where astronomy overlaps with computer science and information technology. The journal aims to provide a new communication channel within that community, which is not well served by current journals, and to help secure recognition of its true importance within modern astronomy. In this inaugural editorial, we describe the rationale for creating the journal, outline its scope and ambitions, and seek input from the community in defining in detail how the journal should work towards its high-level goals.

  13. Recent advances in QM/MM free energy calculations using reference potentials

    PubMed Central

    Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.

    2015-01-01

    Background: Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review: Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions: The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance: As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480
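
    The reference-potential idea can be illustrated with Zwanzig's free energy perturbation formula: sample at a cheap low-level potential and estimate the low-to-high correction from the energy gaps. The energies in the sketch below are synthetic numbers standing in for QM/MM output, so the result is purely illustrative.

```python
# Sketch of a reference-potential correction: sample with a cheap low-level
# potential and estimate the low->high free energy shift via Zwanzig's
# exponential (free energy perturbation) formula.
import numpy as np

kB_T = 0.596  # kcal/mol at ~300 K

rng = np.random.default_rng(1)
# Synthetic stand-ins for energies evaluated on frames sampled at the low level:
E_low = rng.normal(0.0, 1.0, size=5000)
E_high = E_low + rng.normal(2.0, 0.5, size=5000)   # pretend high-level recalculation

dE = E_high - E_low
dA = -kB_T * np.log(np.mean(np.exp(-dE / kB_T)))   # ΔA(low -> high)
print(f"free energy correction ≈ {dA:.2f} kcal/mol")
```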

  14. Parametric Study of Pulse-Combustor-Driven Ejectors at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2015-01-01

    Pulse-combustor configurations developed in recent studies have demonstrated performance levels at high-pressure operating conditions comparable to those observed at atmospheric conditions. However, problems related to the way fuel was being distributed within the pulse combustor were still limiting performance. In the first part of this study, new configurations aimed at improving the fuel distribution and performance of the pulse combustor are investigated computationally. Subsequent sections investigate the performance of various pulse-combustor-driven ejector configurations operating at high-pressure conditions, focusing on the effects of fuel equivalence ratio and ejector throat area. The goal is to design pulse-combustor-ejector configurations that maximize pressure gain while achieving a thermal environment acceptable to a turbine, and at the same time maintain acceptable levels of NO(x) emissions and flow non-uniformities. The computations presented here have demonstrated pressure gains of up to 2.8.

  15. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high-performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is being designed in the HPCS project Cascade.
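
    At its simplest, a data distribution of the kind discussed is a mapping from global indices to owning locales plus each locale's local index range. The sketch below shows a 1-D block distribution in plain Python for illustration only; it is not Chapel syntax or the paper's actual primitives.

```python
# Minimal sketch of a user-level 1-D block distribution: map each global
# index to an owning "locale" and give each locale its contiguous slice.
import math

def block_owner(i, n, p):
    """Owner of global index i when n elements are split into p blocks."""
    block = math.ceil(n / p)
    return i // block

def local_range(rank, n, p):
    """Half-open [lo, hi) range of global indices owned by `rank`."""
    block = math.ceil(n / p)
    return rank * block, min((rank + 1) * block, n)

n, p = 100, 8
print([local_range(r, n, p) for r in range(p)])
print(block_owner(42, n, p))   # -> 3
```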

  16. The use of LANDSAT-1 imagery in mapping and managing soil and range resources in the Sand Hills region of Nebraska

    NASA Technical Reports Server (NTRS)

    Seevers, P. M. (Principal Investigator); Drew, J. V.

    1976-01-01

    The author has identified the following significant results. Evaluation of ERTS-1 imagery for the Sand Hills region of Nebraska has shown that the data can be used to effectively measure several parameters of inventory needs. (1) Vegetative biomass can be estimated with a high degree of confidence using computer compatible tape data. (2) Soils can be mapped to the subgroup level with high altitude aircraft color infrared photography and to the association level with multitemporal ERTS-1 imagery. (3) Water quality in Sand Hills lakes can be estimated utilizing computer compatible tape data. (4) Center pivot irrigation can be inventoried from satellite data and can be monitored regarding site selection and relative success of establishment from high altitude aircraft color infrared photography. (5) ERTS-1 data is of exceptional value in wide-area inventory of natural resource data in the Sand Hills region of Nebraska.

  17. Metagram Software - A New Perspective on the Art of Computation.

    DTIC Science & Technology

    1981-10-01

    Keywords: computer programming, information and analysis, metagramming, philosophy, intelligence, information systems, abstraction and metasystems. Metagramming ... control would also serve well in the analysis of military and political intelligence, and in other areas where highly abstract methods of thought serve ... needed in intelligence because several levels of abstraction are involved in a political or military system, because analysis entails a complex interplay

  18. A Comparative Study of Paper-and-Pen versus Computer-Delivered Assessment Modes on Students' Writing Quality: A Singapore Study

    ERIC Educational Resources Information Center

    Cheung, Yin Ling

    2016-01-01

    Much research has been conducted to investigate the quality of writing and high-level revisions in word processing-assisted and pen-and-paper writing modes. Studies that address cognitive aspects, such as experience and comfort with computers, by which students compose essays during writing assessments have remained relatively unexplored. To fill…

  19. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers ... technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for ... order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error

  20. Toward using games to teach fundamental computer science concepts

    NASA Astrophysics Data System (ADS)

    Edgington, Jeffrey Michael

    Video and computer games have become an important area of study in the field of education. Games have been designed to teach mathematics, physics, raise social awareness, teach history and geography, and train soldiers in the military. Recent work has created computer games for teaching computer programming and understanding basic algorithms. We present an investigation where computer games are used to teach two fundamental computer science concepts: boolean expressions and recursion. The games are intended to teach the concepts and not how to implement them in a programming language. For this investigation, two computer games were created. One is designed to teach basic boolean expressions and operators and the other to teach fundamental concepts of recursion. We describe the design and implementation of both games. We evaluate the effectiveness of these games using before and after surveys. The surveys were designed to ascertain basic understanding, attitudes and beliefs regarding the concepts. The boolean game was evaluated with local high school students and students in a college level introductory computer science course. The recursion game was evaluated with students in a college level introductory computer science course. We present the analysis of the collected survey information for both games. This analysis shows a significant positive change in student attitude towards recursion and modest gains in student learning outcomes for both topics.

  1. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    PubMed

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-12-24

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.

  2. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  3. Remote Agent Demonstration

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Kurien, James; Rajan, Kanna

    1999-01-01

    We describe the computer demonstration of the Remote Agent Experiment (RAX). The Remote Agent is a high-level, model-based, autonomous control agent being validated on the NASA Deep Space 1 spacecraft.

  4. The Nebula Standard Computer Architecture,

    DTIC Science & Technology

    good target for high level languages, the designers also adopted a visibility approach in architecture design that provides more freedom for the hardware implementor while still maintaining software portability. (Author)

  5. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and then the way such parallel kernels can be combined to form parallel tensor product algorithms is examined.
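
    The identity underlying tensor product algorithms is that a 2-D operator A ⊗ B applied to vec(X) equals applying 1-D kernels along each axis, i.e. vec(B X Aᵀ), which is what lets parallel 1-D kernels be combined. The small numpy check below uses made-up dense matrices purely to verify the identity; the paper's language primitives are not shown.

```python
# Sketch of the identity behind tensor-product algorithms:
# (A ⊗ B) vec(X) == vec(B @ X @ A.T), i.e. a 2-D operator is just
# 1-D kernels applied along each axis (and each 1-D sweep parallelizes).
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(5, 5))
X = rng.normal(size=(5, 4))                     # grid of unknowns

direct = np.kron(A, B) @ X.flatten(order="F")   # full tensor-product operator
swept = (B @ X @ A.T).flatten(order="F")        # axis-by-axis 1-D kernels

assert np.allclose(direct, swept)
```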

  6. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem. This implies that a model such as DEM must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the cache and hierarchical memories of modern computers is discussed, along with the performance, speed-ups and efficiency achieved. The parallel code of DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.

  7. Household computer and Internet access: The digital divide in a pediatric clinic population

    PubMed Central

    Carroll, Aaron E.; Rivara, Frederick P.; Ebel, Beth; Zimmerman, Frederick J.; Christakis, Dimitri A.

    2005-01-01

    Past studies have noted a digital divide, or inequality in computer and Internet access related to socioeconomic class. This study sought to measure how many households in a pediatric primary care outpatient clinic had household access to computers and the Internet, and whether this access differed by socio-economic status or other demographic information. We conducted a phone survey of a population-based sample of parents with children ages 0 to 11 years old. Analyses assessed predictors of having home access to a computer, the Internet, and high-speed Internet service. Overall, 88.9% of all households owned a personal computer, and 81.4% of all households had Internet access. Among households with Internet access, 48.3% had high speed Internet at home. There were statistically significant associations between parental income or education and home computer ownership and Internet access. However, the impact of this difference was lessened by the fact that over 60% of families with annual household income of $10,000–$25,000, and nearly 70% of families with only a high-school education had Internet access at home. While income and education remain significant predictors of household computer and internet access, many patients and families at all economic levels have access, and might benefit from health promotion interventions using these modalities. PMID:16779012

  8. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  9. What Does Quality Programming Mean for High Achieving Students?

    ERIC Educational Resources Information Center

    Samudzi, Cleo

    2008-01-01

    The Missouri Academy of Science, Mathematics and Computing (Missouri Academy) is a two-year accelerated, early-entrance-to-college, residential school that matches the level, complexity and pace of the curriculum with the readiness and motivation of high achieving high school students. The school is a part of Northwest Missouri State University…

  10. Programming languages and compiler design for realistic quantum hardware.

    PubMed

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
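
    To make the hardware/software gap concrete, the sketch below shows the kind of object a toolflow's lowest layer ultimately has to realize: unitary gates acting on qubit amplitudes, here preparing a 2-qubit Bell state with plain numpy. No real quantum SDK or device-level detail is involved.

```python
# Minimal statevector sketch of what quantum software layers ultimately
# target: unitaries acting on qubit amplitudes (here, a 2-qubit Bell state).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4); state[0] = 1.0                 # |00>
state = np.kron(H, I2) @ state                      # H on the first qubit
state = CNOT @ state                                # entangle -> (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
```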

  11. Programming languages and compiler design for realistic quantum hardware

    NASA Astrophysics Data System (ADS)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  12. An integrated system for land resources supervision based on the IoT and cloud computing

    NASA Astrophysics Data System (ADS)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  13. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
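
    A standard way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve cheaply in single precision and recover accuracy with double-precision residuals. The sketch below illustrates that pattern on a made-up well-conditioned system; it is not the authors' kernel or their power-profiling tooling.

```python
# Sketch of mixed-precision iterative refinement: cheap float32 solves,
# accuracy recovered with float64 residuals.
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.normal(size=n)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # low-precision solve

for _ in range(5):                             # refine in double precision
    r = b - A @ x                              # high-precision residual
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx

print("residual norm:", np.linalg.norm(b - A @ x))
```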

  14. Structural studies of RNA-protein complexes: A hybrid approach involving hydrodynamics, scattering, and computational methods.

    PubMed

    Patel, Trushar R; Chojnowski, Grzegorz; Astha; Koul, Amit; McKenna, Sean A; Bujnicki, Janusz M

    2017-04-15

    The diverse functional cellular roles played by ribonucleic acids (RNA) have emphasized the need to develop rapid and accurate methodologies to elucidate the relationship between the structure and function of RNA. Structural biology tools such as X-ray crystallography and Nuclear Magnetic Resonance are highly useful methods to obtain atomic-level resolution models of macromolecules. However, both methods have sample, time, and technical limitations that prevent their application to a number of macromolecules of interest. An emerging alternative to high-resolution structural techniques is to employ a hybrid approach that combines low-resolution shape information about macromolecules and their complexes from experimental hydrodynamic (e.g. analytical ultracentrifugation) and solution scattering measurements (e.g., solution X-ray or neutron scattering), with computational modeling to obtain atomic-level models. While promising, scattering methods rely on aggregation-free, monodispersed preparations and therefore the careful development of a quality control pipeline is fundamental to an unbiased and reliable structural determination. This review article describes hydrodynamic techniques that are highly valuable for homogeneity studies, scattering techniques useful to study the low-resolution shape, and strategies for computational modeling to obtain high-resolution 3D structural models of RNAs, proteins, and RNA-protein complexes. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  15. Abstractions for DNA circuit design.

    PubMed

    Lakin, Matthew R; Youssef, Simon; Cardelli, Luca; Phillips, Andrew

    2012-03-07

    DNA strand displacement techniques have been used to implement a broad range of information processing devices, from logic gates, to chemical reaction networks, to architectures for universal computation. Strand displacement techniques enable computational devices to be implemented in DNA without the need for additional components, allowing computation to be programmed solely in terms of nucleotide sequences. A major challenge in the design of strand displacement devices has been to enable rapid analysis of high-level designs while also supporting detailed simulations that include known forms of interference. Another challenge has been to design devices capable of sustaining precise reaction kinetics over long periods, without relying on complex experimental equipment to continually replenish depleted species over time. In this paper, we present a programming language for designing DNA strand displacement devices, which supports progressively increasing levels of molecular detail. The language allows device designs to be programmed using a common syntax and then analysed at varying levels of detail, with or without interference, without needing to modify the program. This allows a trade-off to be made between the level of molecular detail and the computational cost of analysis. We use the language to design a buffered architecture for DNA devices, capable of maintaining precise reaction kinetics for a potentially unbounded period. We test the effectiveness of buffered gates to support long-running computation by designing a DNA strand displacement system capable of sustained oscillations.

  16. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, Kriging was used to interpolate the predictions of the adjusted SOM into finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations between 2000 and 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
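
    The SOM half of such a hybrid can be sketched compactly: competitive learning maps multi-station groundwater-level snapshots onto a small grid of prototypes. The synthetic data, map size and decay schedules below are illustrative assumptions, not the study's configuration.

```python
# Compact Self-Organizing Map sketch: cluster multi-station groundwater-level
# patterns onto a small grid of prototypes (synthetic data as a stand-in).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 20))           # 500 monthly snapshots x 20 "stations"

grid = 4                                 # 4x4 map
W = rng.normal(size=(grid * grid, 20))   # prototype vectors
coords = np.array([(i, j) for i in range(grid) for j in range(grid)])

for t, x in enumerate(X):
    lr = 0.5 * np.exp(-t / len(X))                       # decaying learning rate
    sigma = 2.0 * np.exp(-t / len(X))                    # decaying neighbourhood width
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))       # best-matching unit
    dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    h = np.exp(-dist2 / (2 * sigma**2))                  # neighbourhood function
    W += lr * h[:, None] * (x - W)

print("cluster sizes:", np.bincount(
    [np.argmin(np.linalg.norm(W - x, axis=1)) for x in X], minlength=grid * grid))
```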

  17. A Software Application for Assessing Readability in the Japanese EFL Context

    ERIC Educational Resources Information Center

    Ozasa, Toshiaki; Weir, George R. S.; Fukui, Masayasu

    2010-01-01

    We have been engaged in developing a readability index and its application software attuned for Japanese EFL learners. The index program, Ozasa-Fukui Year Level Program, Ver. 1.0, was used in developing the readability metric Ozasa-Fukui Year Level Index but tended to assume a high level of computer knowledge in its users. As a result, the…

  18. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power which is a two order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  19. Self-Directed Cooperative Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo; Morris, Robert (Technical Monitor)

    2003-01-01

    The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high-level of uncertainty in this domain.
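
    Decision-theoretic rover control of this kind is ultimately a Markov decision problem; the toy sketch below solves one by value iteration over a made-up battery/science state space. The states, rewards and transition probabilities are invented for illustration and are not the project's compiled models.

```python
# Toy Markov decision problem in the spirit of rover control: choose between
# "experiment" and "recharge" as battery level varies; solved by value iteration.
import numpy as np

states = range(4)                 # battery level 0 (empty) .. 3 (full)
actions = ["experiment", "recharge"]
gamma = 0.95

def transitions(s, a):
    """Return list of (probability, next_state, reward); illustrative numbers."""
    if a == "experiment":
        if s == 0:
            return [(1.0, 0, -1.0)]                  # nothing to run on
        return [(0.8, s - 1, 2.0), (0.2, s, 2.0)]    # science reward, drains battery
    return [(1.0, min(s + 1, 3), 0.0)]               # recharge

V = np.zeros(len(states))
for _ in range(200):
    V = np.array([max(sum(p * (r + gamma * V[s2]) for p, s2, r in transitions(s, a))
                      for a in actions) for s in states])

policy = [max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in transitions(s, a)))
          for s in states]
print(dict(zip(states, policy)))
```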

  20. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).
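
    Although the abstract does not spell out the statistic, diagnostic reference levels are commonly taken as the 75th percentile of the surveyed dose distribution per examination type. The sketch below computes such percentiles from made-up CTDIvol values; the numbers are not the study's data.

```python
# Minimal sketch of deriving diagnostic reference levels from survey data:
# DRLs are commonly taken as the 75th percentile of the dose distribution
# per examination type (values below are made up, not the study's data).
import numpy as np

ctdi_vol = {                       # mGy per scanner/exam survey (illustrative)
    "head":         [52, 60, 48, 55, 58],
    "abdomen":      [14, 18, 12, 16, 15],
    "chest":        [10, 13, 9, 12, 11],
    "lumbar_spine": [22, 27, 20, 25, 24],
}

drl = {exam: float(np.percentile(doses, 75)) for exam, doses in ctdi_vol.items()}
print(drl)
```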

  1. The change in critical technologies for computational physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1990-01-01

    It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of especially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.

  2. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
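
    For a single realization of the random coefficient, the domain-decomposition idea can be illustrated with an overlapping alternating Schwarz iteration on a 1-D diffusion equation; the sketch below does exactly that with a sampled lognormal coefficient. It is deliberately serial and per-sample, not the intrusive polynomial-chaos solver described above.

```python
# Sketch of overlapping (alternating Schwarz) domain decomposition for one
# realization of -d/dx( k(x) du/dx ) = 1 on [0,1], u(0)=u(1)=0, with a
# lognormal random coefficient k(x). Two subdomains with overlap.
import numpy as np

rng = np.random.default_rng(5)
n = 101
x = np.linspace(0.0, 1.0, n)
k = np.exp(0.5 * rng.normal(size=n))          # one sample of the random coefficient

def solve_block(lo, hi, u_left, u_right):
    """Direct solve of the FD system on interior nodes lo..hi with Dirichlet data."""
    h = x[1] - x[0]
    idx = np.arange(lo, hi + 1)
    m = len(idx)
    A = np.zeros((m, m)); rhs = np.ones(m)
    for r, i in enumerate(idx):
        kw, ke = 0.5 * (k[i - 1] + k[i]), 0.5 * (k[i] + k[i + 1])
        A[r, r] = (kw + ke) / h**2
        if r > 0: A[r, r - 1] = -kw / h**2
        else:     rhs[r] += kw / h**2 * u_left
        if r < m - 1: A[r, r + 1] = -ke / h**2
        else:         rhs[r] += ke / h**2 * u_right
    return np.linalg.solve(A, rhs)

u = np.zeros(n)
for _ in range(30):                                # Schwarz sweeps
    u[1:61] = solve_block(1, 60, 0.0, u[61])       # subdomain 1: nodes 1..60
    u[40:100] = solve_block(40, 99, u[39], 0.0)    # subdomain 2: nodes 40..99 (overlap 40..60)
print("max u ≈", u.max())
```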

  3. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  4. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  5. ExoMol molecular line lists XIX: high-accuracy computed hot line lists for H₂¹⁸O and H₂¹⁷O

    NASA Astrophysics Data System (ADS)

    Polyansky, Oleg L.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan; Yurchenko, Sergei N.; Ovsyannikov, Roman I.; Zobov, Nikolai F.

    2017-04-01

    Hot line lists for two isotopologues of water, H₂¹⁸O and H₂¹⁷O, are presented. The calculations employ newly constructed potential energy surfaces (PES), which take advantage of a novel method for using the large set of experimental energy levels for H₂¹⁶O to give high-quality predictions for H₂¹⁸O and H₂¹⁷O. This procedure greatly extends the energy range for which a PES can be accurately determined, allowing an accurate prediction of higher lying energy levels than are currently known from direct laboratory measurements. This PES is combined with a high-accuracy, ab initio dipole moment surface of water in the computation of all energy levels, transition frequencies and associated Einstein A coefficients for states with rotational excitation up to J = 50 and energies up to 30 000 cm⁻¹. The resulting HotWat78 line lists complement the well-used BT2 H₂¹⁶O line list. Full line lists are made available online as Supporting Information and at www.exomol.com.

  6. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    PubMed

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
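
    The unsupervised building block of a deep belief network is the restricted Boltzmann machine; the sketch below trains one with one-step contrastive divergence on synthetic binary patches and exposes hidden activations that a linear classifier could read out. The data and hyperparameters are placeholders, not the paper's Persian character setup.

```python
# Minimal restricted Boltzmann machine (the building block of a deep belief
# net) trained with one-step contrastive divergence on synthetic binary data.
import numpy as np

rng = np.random.default_rng(6)
data = (rng.random((1000, 64)) < 0.3).astype(float)   # stand-in for character patches

n_vis, n_hid, lr = 64, 32, 0.05
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
        h0 = (rng.random(n_hid) < ph0).astype(float)      # sample hidden units
        v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruction
        ph1 = sigmoid(v1 @ W + b_h)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1)) # CD-1 update
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

hidden_features = sigmoid(data @ W + b_h)   # representations a linear classifier could read out
print(hidden_features.shape)
```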

  7. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.

  8. A multitasking, multisinked, multiprocessor data acquisition front end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, R.; Au, R.; Molen, A.V.

    1989-10-01

    The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real-time kernel (pSOS), and implemented primarily in a high-level language (C). This system has been attached to the back end on-line computing system at NSCL via our high-performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment-specific code.

  9. High-resolution computer-aided moire

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high-resolution, computer-assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of the displacement field from the sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of the application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
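
    A condensed version of the 2-D Fourier-transform route from grid image to displacement: isolate the carrier lobe in the spectrum, invert it, and convert the residual phase to displacement through the grid pitch. The fringe pattern below is synthetic and the band-pass window is a simple hard cutoff, so this is only a sketch of the method, not the paper's processing chain.

```python
# Sketch of the Fourier-transform method: recover the displacement field from
# a (synthetic) moire grid by isolating the carrier lobe and taking its phase.
import numpy as np

N, pitch = 256, 8.0                         # pixels, grid pitch in pixels
y, x = np.mgrid[0:N, 0:N]
u_true = 0.5 * np.sin(2 * np.pi * y / N)    # imposed displacement field (pixels)
grid = 1 + np.cos(2 * np.pi * (x + u_true) / pitch)   # deformed grid intensity

spec = np.fft.fftshift(np.fft.fft2(grid))
fx = np.fft.fftshift(np.fft.fftfreq(N))
FX, FY = np.meshgrid(fx, fx)

# Band-pass window around the +1 carrier lobe at fx = 1/pitch
mask = (np.abs(FX - 1.0 / pitch) < 0.5 / pitch) & (np.abs(FY) < 0.5 / pitch)
carrier = np.fft.ifft2(np.fft.ifftshift(spec * mask))

phase = np.angle(carrier) - 2 * np.pi * x / pitch      # remove carrier phase
phase = np.angle(np.exp(1j * phase))                   # wrap to (-pi, pi]
u_est = phase * pitch / (2 * np.pi)                    # displacement in pixels
print("max abs error:", np.abs(u_est - u_true).max())
```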

  10. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
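
    The generic sparse correspondence step that such a library automates can be illustrated with OpenCV's ORB detector and brute-force Hamming matching; the file names below are placeholders and this is not AutoCNet's own API or matcher configuration.

```python
# Generic sparse correspondence sketch with OpenCV (not AutoCNet's API):
# detect ORB keypoints in two overlapping images and match their descriptors.
import cv2

img1 = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Candidate control points: matched pixel coordinates in each image
points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
print(len(matches), "raw matches;", len(points), "kept")
```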

  11. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE -a high performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers - depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data driven processing and top-down, model driven processing are discussed.

  12. Musculoskeletal symptoms and associated risk factors among office workers with high workload computer use.

    PubMed

    Cho, Chiung-Yu; Hwang, Yea-Shwu; Cherng, Rong-Ju

    2012-09-01

    Although the prevalence of reported discomfort by computer workers is high, the impact of high computer workload on musculoskeletal symptoms remains unclear. The purpose of this study was to investigate the prevalence of musculoskeletal symptoms for office workers with high computer workload. The association between risk factors and musculoskeletal symptoms was also assessed. Two questionnaires were posted on the Web sites of 3 companies and 1 university to recruit computer users in Tainan, Taiwan, during May to July 2009. The 12-item Chinese Health Questionnaire and Musculoskeletal Symptom Questionnaire were chosen as the evaluation tools for musculoskeletal symptoms and their associated risk factors. Chinese Health Questionnaire greater than 5 and computer usage greater than 7 h/d were used as the cutoffs to divide groups. Descriptive statistics were computed for mean values and frequencies. χ² analysis was used to determine significant differences between groups. A 0.05 level of significance was used for statistical comparisons. A total of 254 subjects returned the questionnaire, of which 203 met the inclusion criteria. The 3 leading regions of musculoskeletal symptoms among the computer users were the shoulder (73%), neck (71%), and upper back (60%) areas. Similarly, the 3 leading regions of musculoskeletal symptoms among the computer users with high workload were shoulder (77.3%), neck (75.6%), and upper back (63.9%) regions. High psychologic distress was significantly associated with shoulder and upper back complaints (odds ratio [OR], 3.46; OR, 2.24), whereas a high workload was significantly associated with lower back complaints (OR, 1.89). Females were more likely to report shoulder complaints (OR, 2.25). This study found that high psychologic distress was significantly associated with shoulder and upper back pain, whereas high workload was associated with lower back pain. Women tended to have a greater risk of shoulder complaints than men. Developing an intervention that addresses both physical and psychologic problems is important for future studies. Copyright © 2012 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.
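
    The reported analysis (χ² tests and odds ratios on symptom-by-exposure tables) can be reproduced on a 2×2 contingency table with scipy; the counts below are invented for illustration and do not correspond to the study's data.

```python
# Sketch of the reported analysis on an invented 2x2 table:
# rows = high vs. lower psychologic distress, columns = shoulder symptoms yes/no.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[45, 15],    # high distress: symptomatic, asymptomatic (made-up counts)
                  [80, 63]])   # lower distress

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, OR = {odds_ratio:.2f}")
```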

  13. African-American males in computer science---Examining the pipeline for clogs

    NASA Astrophysics Data System (ADS)

    Stone, Daryl Bryant

    The literature on African-American males (AAM) begins with a statement to the effect that "Today young Black men are more likely to be killed or sent to prison than to graduate from college." Why are the numbers of African-American male college graduates decreasing? Why are those enrolled in college not majoring in the science, technology, engineering, and mathematics (STEM) disciplines? This research explored why African-American males are not filling the well-recognized industry need for Computer Scientists/Technologists by choosing college tracks to these careers. The literature on STEM disciplines focuses largely on women in STEM, as opposed to minorities, and within minorities, there is a noticeable research gap in addressing the needs and opportunities available to African-American males. The primary goal of this study was therefore to examine the computer science "pipeline" from the African-American male perspective. The method included distributing a "Computer Science Degree Self-Efficacy Scale" to five groups of African-American male students: (1) fourth graders, (2) eighth graders, (3) eleventh graders, (4) underclass undergraduate computer science majors, and (5) upperclass undergraduate computer science majors. In addition to a 30-question self-efficacy test, subjects from each group were asked to participate in a group discussion about "African-American males in computer science." The audio record of each group meeting provides qualitative data for the study. The hypotheses include the following: (1) There is no significant difference in "Computer Science Degree" self-efficacy between fourth and eighth graders. (2) There is no significant difference in "Computer Science Degree" self-efficacy between eighth and eleventh graders. (3) There is no significant difference in "Computer Science Degree" self-efficacy between eleventh graders and lower-level computer science majors. (4) There is no significant difference in "Computer Science Degree" self-efficacy between lower-level computer science majors and upper-level computer science majors. (5) There is no significant difference in "Computer Science Degree" self-efficacy between each of the five groups of students. Finally, the researcher selected African-American male students attending six primary schools, including the predominately African-American elementary, middle, and high school that the researcher attended during his own academic career. Additionally, a racially mixed elementary, middle, and high school was selected from the same county in Maryland. Bowie State University provided both the underclass and upperclass computer science majors surveyed in this study. Of the five hypotheses, the sample provided enough evidence to support the claim that there are significant differences in "Computer Science Degree" self-efficacy between the five groups of students. ANOVA by question and by total self-efficacy score provided further statistically significant results. Additionally, factor analysis and review of the qualitative data provided further insight. Overall, the data suggest a 'clog' may exist at the middle school level, and students attending racially mixed schools were more confident in their computer, math, and science skills. African-American males admit to spending a great deal of time on social networking websites and emailing, but are unaware of the skills and knowledge needed to study in the computing disciplines. The majority of the subjects knew few, if any, AAMs in the 'computing discipline pipeline'. The collegiate African-American males in this study agreed that computer programming is a difficult area and serves as a 'major clog in the pipeline'.

  14. Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1978-01-01

    The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
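
    The coarse-fine cycling that MLAT generalizes can be illustrated with a plain two-grid correction scheme for the model problem -u'' = f on (0, 1) with zero boundary values. The sketch below is illustrative only (not Brandt's adaptive MLAT code, and with a direct solve standing in for the coarse-grid recursion): weighted-Jacobi smoothing, full-weighting restriction of the residual, a coarse-grid error solve, and linear-interpolation prolongation.

      import numpy as np

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
          return r

      def weighted_jacobi(u, f, h, sweeps=3, omega=2/3):
          for _ in range(sweeps):
              u_new = u.copy()
              u_new[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h**2*f[1:-1])
              u = u_new
          return u

      def restrict(r):
          # Full weighting from the fine grid to the coarse grid.
          rc = np.zeros((len(r) - 1)//2 + 1)
          rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-2:2] + 0.25*r[3:-1:2]
          return rc

      def prolong(ec, n_fine):
          # Linear interpolation of the coarse-grid correction to the fine grid.
          e = np.zeros(n_fine)
          e[::2] = ec
          e[1::2] = 0.5*(ec[:-1] + ec[1:])
          return e

      def two_grid(u, f, h):
          u = weighted_jacobi(u, f, h)                  # pre-smoothing
          rc = restrict(residual(u, f, h))              # restrict residual
          nc, hc = len(rc), 2*h
          A = (np.diag(2*np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
               - np.diag(np.ones(nc - 3), -1)) / hc**2  # coarse operator
          ec = np.zeros(nc)
          ec[1:-1] = np.linalg.solve(A, rc[1:-1])       # coarse error solve
          u = u + prolong(ec, len(u))                   # correct
          return weighted_jacobi(u, f, h)               # post-smoothing

      n = 64
      h = 1.0/n
      x = np.linspace(0, 1, n + 1)
      f = np.pi**2 * np.sin(np.pi*x)                    # exact solution: sin(pi x)
      u = np.zeros(n + 1)
      for _ in range(10):
          u = two_grid(u, f, h)
      print('max error:', np.abs(u - np.sin(np.pi*x)).max())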

  15. Internet Use for Health-Care Information by Subjects With COPD.

    PubMed

    Delgado, Cionéia K; Gazzotti, Mariana R; Santoro, Ilka L; Carvalho, Andrea K; Jardim, José R; Nascimento, Oliver A

    2015-09-01

    Although the internet is an important tool for entertainment, work, learning, shopping, and communication, it is also a possible source for information on health and disease. The aim of this study was to evaluate the proportion of subjects with COPD in São Paulo, Brazil, who use the internet to obtain information about their disease. Subjects (N = 382) with COPD answered a 17-question survey, including information regarding computer use, internet access, and searching for sites on COPD. Our sample was distributed according to the socioeconomic levels of the Brazilian population (low, 17.8%; medium, 66.5%; and high, 15.7%). Most of the subjects in the sample were male (62.6%), with a mean age of 67.0 ± 9.9 y. According to Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages, 74.3% of the subjects were in stage II or III. In addition, 51.6% of the subjects had a computer, 49.7% accessed the internet, and 13.9% used it to search for information about COPD. The internet was predominantly accessed by male (70.3%) and younger (64.6 ± 9.5 y of age) subjects compared with female (29.7%, P = .04) and older (67.5 ± 9.6 y of age, P < .007) subjects. Searching for information about COPD on the internet was associated with having a computer (5.9-fold), Medical Research Council dyspnea level 1 (5.3-fold), and high social class (8.4-fold). The search for information on COPD was not influenced by GOLD staging. A low percentage of subjects with COPD in São Paulo use the internet as a tool to obtain information about their disease. This search is associated with having a computer, low dyspnea score, and high socioeconomic level. Copyright © 2015 by Daedalus Enterprises.

  16. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Astrophysics Data System (ADS)

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.

    2011-12-01

    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several of its applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than in the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware vs. legacy hardware that must be maintained past its prime to amortize the cost. In addition to a trade study of advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth Science applications and better understanding the potential for Cloud Computing in further data- and computing-intensive Earth Science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications/instances, e.g. Giovanni instances, to the Nebula platform and bringing mature migrated applications into operation on the Nebula.

  17. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
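
    The core fitting idea, adjusting Lennard-Jones well-depth and size parameters so that model energies reproduce reference binding energies, can be sketched in a few lines. The data below are synthetic stand-ins (not the paper's DCP or CCSD(T) values), and the single-site pair potential is a simplification of a full methane-methane interaction model.

      import numpy as np
      from scipy.optimize import curve_fit

      def lj_energy(r, epsilon, sigma):
          """Pair interaction energy for a single Lennard-Jones site."""
          return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

      # Synthetic reference curve: separations (angstrom) and energies (kcal/mol),
      # generated from made-up "true" parameters plus a little noise.
      rng = np.random.default_rng(0)
      r_ref = np.linspace(3.4, 6.0, 20)
      e_ref = lj_energy(r_ref, 0.30, 3.70) + rng.normal(0, 0.005, r_ref.size)

      params, _ = curve_fit(lj_energy, r_ref, e_ref, p0=(0.2, 3.5))
      epsilon_fit, sigma_fit = params
      print(f'epsilon = {epsilon_fit:.3f} kcal/mol, sigma = {sigma_fit:.3f} angstrom')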

  18. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
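
    For reference, the multi-scale Laplacian-of-Gaussian edge detection that the paper maps onto the FPGA can be prototyped in software. The sketch below is a hypothetical host-side prototype (not the VHDL architecture described in the paper): it filters a synthetic image at several scales and flags zero-crossings of the response.

      import numpy as np
      from scipy import ndimage

      def log_edges(image, sigmas=(1.0, 2.0, 4.0)):
          edges = []
          for sigma in sigmas:
              log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
              # A pixel is an edge candidate where the LoG response changes sign.
              zero_cross = (
                  (np.sign(log[:-1, :-1]) != np.sign(log[1:, :-1])) |
                  (np.sign(log[:-1, :-1]) != np.sign(log[:-1, 1:]))
              )
              edges.append(zero_cross)
          return edges

      image = np.zeros((64, 64))
      image[16:48, 16:48] = 1.0          # synthetic bright square
      for sigma, e in zip((1.0, 2.0, 4.0), log_edges(image)):
          print(f'sigma={sigma}: {e.sum()} zero-crossing pixels')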

  19. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years and this trend may also continue for the near future. However, it is a well known fact that there are major obstacles, i.e., the physical limitations of feature size reduction and the ever increasing cost of foundries, that would prevent the long term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as FFT and wavelet and Walsh-Hadamard transforms.

  20. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.

    1978-01-01

    Several high level languages which evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  1. Utilizing Computer and Multimedia Technology in Generating Choreography for the Advanced Dance Student at the High School Level.

    ERIC Educational Resources Information Center

    Griffin, Irma Amado

    This study describes a pilot program utilizing various multimedia computer programs on a MacQuadra 840 AV. The target group consisted of six advanced dance students who participated in the pilot program within the dance curriculum by creating a database of dance movement using video and still photography. The students combined desktop publishing,…

  2. Systems, methods and apparatus for implementation of formal specifications derived from informal requirements

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments an informal specification is translated without human intervention into a formal specification. In some embodiments the formal specification is a process-based specification. In some embodiments, the formal specification is translated into a high-level computer programming language which is further compiled into a set of executable computer instructions.

  3. A Methodological Study Evaluating a Pretutorial Computer-Compiled Instructional Program in High School Physics Instruction Initiated from Student-Teacher Selected Instructional Objectives. Final Report.

    ERIC Educational Resources Information Center

    Leonard, B. Charles; Denton, Jon J.

    A study sought to develop and evaluate an instructional model which utilized the computer to produce individually prescribed instructional guides to account for the idiosyncratic variations among students in physics classes at the secondary school level. The students in the treatment groups were oriented toward the practices of selecting…

  4. Does the Computer-Assisted Remedial Mathematics Program at Kearny High School Lead to Improved Scores on the N.J. Early Warning Test?

    ERIC Educational Resources Information Center

    Schalago-Schirm, Cynthia

    Eighth-grade students in New Jersey take the Early Warning Test (EWT), which involves reading, writing, and mathematics. Students with EWT scores below the state level of competency take a remedial mathematics course that provides students with computer-assisted instruction (2 days per week) as well as regular classroom instruction (3 days per…

  5. Evaluating the Security of Machine Learning Algorithms

    DTIC Science & Technology

    2008-05-20

    Two far-reaching trends in computing have grown in significance in recent years. First, statistical machine learning has entered the mainstream as a...computing applications. The growing intersection of these trends compels us to investigate how well machine learning performs under adversarial conditions... machine learning has a structure that we can use to build secure learning systems. This thesis makes three high-level contributions. First, we develop a

  6. Wild Fire Computer Model Helps Firefighters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canfield, Jesse

    2012-09-04

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  7. Wild Fire Computer Model Helps Firefighters

    ScienceCinema

    Canfield, Jesse

    2018-02-14

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  8. High-performance analysis of filtered semantic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Fox, Armando; Gilbert, John R.

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.
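
    The filtered-query idea, applying a user-written predicate to edge attributes while the graph algorithm runs, can be shown with a toy example. The sketch below is plain, single-process Python (KDT/SEJITS and Combinatorial BLAS are distributed and far more elaborate); the graph, attributes, and filter are made up for illustration.

      from collections import deque

      # Toy attributed graph: adjacency lists of (neighbor, edge-attribute dict).
      graph = {
          'a': [('b', {'type': 'cites', 'year': 2010}),
                ('c', {'type': 'authored', 'year': 2008})],
          'b': [('d', {'type': 'cites', 'year': 2012})],
          'c': [('d', {'type': 'cites', 'year': 2005})],
          'd': [],
      }

      def filtered_bfs(graph, source, edge_filter):
          # Breadth-first search that only traverses edges passing the filter.
          visited, frontier = {source}, deque([source])
          while frontier:
              v = frontier.popleft()
              for w, attrs in graph[v]:
                  if edge_filter(attrs) and w not in visited:
                      visited.add(w)
                      frontier.append(w)
          return visited

      # Only follow "cites" edges newer than 2009.
      print(filtered_bfs(graph, 'a', lambda e: e['type'] == 'cites' and e['year'] > 2009))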

  9. Routine human-competitive machine intelligence by means of genetic programming

    NASA Astrophysics Data System (ADS)

    Koza, John R.; Streeter, Matthew J.; Keane, Martin

    2004-01-01

    Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.
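
    As a concrete, if toy-scale, illustration of starting from a high-level statement of the problem (here, a target function and an error measure) and letting evolution assemble a program, the sketch below runs a deliberately minimal, mutation-only tree GP for symbolic regression. All sets and parameters are illustrative; real GP systems such as Koza's also use crossover and far larger populations.

      import random, operator

      FUNCS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
      TERMS = ['x', 1.0]

      def random_tree(depth=3):
          # Grow a random expression tree of bounded depth.
          if depth == 0 or random.random() < 0.3:
              return random.choice(TERMS)
          return (random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1))

      def evaluate(tree, x):
          if tree == 'x':
              return x
          if isinstance(tree, float):
              return tree
          (f, _), left, right = tree
          return f(evaluate(left, x), evaluate(right, x))

      def fitness(tree):
          # Sum of squared errors against the target function x^2 + x + 1.
          xs = [i / 10.0 for i in range(-10, 11)]
          return sum((evaluate(tree, x) - (x * x + x + 1.0)) ** 2 for x in xs)

      def mutate(tree, depth=2):
          # Replace a random subtree with a freshly generated one.
          if not isinstance(tree, tuple) or random.random() < 0.3:
              return random_tree(depth)
          f, left, right = tree
          if random.random() < 0.5:
              return (f, mutate(left, depth), right)
          return (f, left, mutate(right, depth))

      def evolve(pop_size=200, generations=40):
          pop = [random_tree() for _ in range(pop_size)]
          for _ in range(generations):
              parents = sorted(pop, key=fitness)[: pop_size // 4]   # truncation selection
              pop = parents + [mutate(random.choice(parents))
                               for _ in range(pop_size - len(parents))]
          return min(pop, key=fitness)

      best = evolve()
      print('best fitness (lower is better):', fitness(best))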

  10. Development and experimental assessment of a numerical modelling code to aid the design of profile extrusion cooling tools

    NASA Astrophysics Data System (ADS)

    Carneiro, O. S.; Rajkumar, A.; Fernandes, C.; Ferrás, L. L.; Habla, F.; Nóbrega, J. M.

    2017-10-01

    In the extrusion of thermoplastic profiles, after the forming stage that takes place in the extrusion die, the profile must be cooled in a metallic calibrator. This stage must be done at a high rate, to assure increased productivity, while avoiding the development of high temperature gradients, in order to minimize the level of induced thermal residual stresses. In this work, we present a new coupled numerical solver, developed in the framework of the OpenFOAM® computational library, that computes the temperature distribution in both domains (metallic calibrator and plastic profile) simultaneously, whose implementation aimed at minimizing the computational time. The new solver was experimentally assessed with an industrial case study.

  11. Brain Swelling and Loss of Gray and White Matter Differentiation in Human Postmortem Cases by Computed Tomography.

    PubMed

    Shirota, Go; Gonoi, Wataru; Ishida, Masanori; Okuma, Hidemi; Shintani, Yukako; Abe, Hiroyuki; Takazawa, Yutaka; Ikemura, Masako; Fukayama, Masashi; Ohtomo, Kuni

    2015-01-01

    The purpose of this study was to evaluate the brain by postmortem computed tomography (PMCT) versus antemortem computed tomography (AMCT) using brains from the same patients. We studied 36 nontraumatic subjects who underwent AMCT, PMCT, and pathological autopsy in our hospital between April 2009 and December 2013. PMCT was performed within 20 h after death, followed by pathological autopsy including the brain. Autopsy confirmed the absence of intracranial disorders that might be related to the cause of death or might affect measurements in our study. Width of the third ventricle, width of the central sulcus, and attenuation in gray matter (GM) and white matter (WM) from the same area of the basal ganglia, centrum semiovale, and high convexity were statistically compared between AMCT and PMCT. Both the width of the third ventricle and the central sulcus were significantly shorter in PMCT than in AMCT (P < 0.0001). GM attenuation increased after death at the level of the centrum semiovale and high convexity, but the differences were not statistically significant considering the differences in attenuation among the different computed tomography scanners. WM attenuation significantly increased after death at all levels (P<0.0001). The differences were larger than the differences in scanners. GM/WM ratio of attenuation was significantly lower by PMCT than by AMCT at all levels (P<0.0001). PMCT showed an increase in WM attenuation, loss of GM-WM differentiation, and brain swelling, evidenced by a decrease in the size of ventricles and sulci.

  12. The IUPAC Database of Rotational-Vibrational Energy Levels and Transitions of Water Isotopologues from Experiment and Theory

    NASA Astrophysics Data System (ADS)

    Császár, Attila G.; Furtenbacher, T.; Tennyson, Jonathan; Bernath, Peter F.; Brown, Linda R.; Campargue, Alain; Daumont, Ludovic; Gamache, Robert R.; Hodges, Joseph T.; Naumenko, Olga V.; Polyansky, Oleg L.; Rothman, Laurence S.; Vandaele, Ann Carine; Zobov, Nikolai F.

    2014-06-01

    The results of an IUPAC Task Group formed in 2004 on "A Database of Water Transitions from Experiment and Theory" (Project No. 2004-035-1-100) are presented. Energy levels and recommended labels involving exact and approximate quantum numbers for the main isotopologues of water in the gas phase, H2^16O, H2^18O, H2^17O, HD^16O, HD^18O, HD^17O, D2^16O, D2^18O, and D2^17O, are determined from measured transition wavenumbers. The transition wavenumbers and energy levels are validated using the MARVEL (measured active rotational-vibrational energy levels) approach and first-principles nuclear motion computations. The data are extensive, e.g., more than 200,000 transitions have been handled for H2^16O, and include lines and levels that are required for the analysis and synthesis of spectra, thermochemical applications, the construction of theoretical models, and the removal of spectral contamination by ubiquitous water lines. These datasets can also be used to assess where measurements are lacking for each isotopologue and to provide accurate frequencies for many yet-to-be-measured transitions. The lack of high-quality frequency calibration standards in the near infrared is identified as an issue that has hindered the determination of high-accuracy energy levels at higher frequencies. The generation of spectra using the MARVEL energy levels combined with transition intensities computed from high-accuracy ab initio dipole moment surfaces is discussed.
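
    The MARVEL step mentioned above, turning measured transition wavenumbers into a consistent set of energy levels, is at heart a weighted linear least-squares problem. The sketch below uses a made-up four-level network (labels, wavenumbers, and uncertainties are illustrative, not data from the IUPAC database).

      import numpy as np

      levels = ['L0', 'L1', 'L2', 'L3']
      # (upper level, lower level, wavenumber / cm^-1, uncertainty / cm^-1)
      transitions = [
          ('L1', 'L0', 100.02, 0.02),
          ('L2', 'L0', 250.05, 0.05),
          ('L2', 'L1', 149.99, 0.02),
          ('L3', 'L1', 300.10, 0.05),
          ('L3', 'L2', 150.08, 0.05),
      ]

      index = {name: i for i, name in enumerate(levels)}
      A = np.zeros((len(transitions), len(levels)))
      b = np.zeros(len(transitions))
      w = np.zeros(len(transitions))
      for row, (up, lo, nu, sigma) in enumerate(transitions):
          A[row, index[up]] = 1.0      # each transition is E_upper - E_lower
          A[row, index[lo]] = -1.0
          b[row] = nu
          w[row] = 1.0 / sigma         # weight by inverse uncertainty

      # Pin the ground state to zero by dropping its column, then solve the
      # weighted least-squares problem min || W (A E - b) ||.
      E, *_ = np.linalg.lstsq(A[:, 1:] * w[:, None], b * w, rcond=None)
      for name, energy in zip(levels[1:], E):
          print(f'{name}: {energy:.4f} cm^-1')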

  13. Impact of Primary Spherical Aberration, Spatial Frequency and Stiles Crawford Apodization on Wavefront determined Refractive Error: A Computational Study

    PubMed Central

    Xu, Renfeng; Bradley, Arthur; Thibos, Larry N.

    2013-01-01

    Purpose We tested the hypothesis that pupil apodization is the basis for central pupil bias of spherical refractions in eyes with spherical aberration. Methods We employed Fourier computational optics in which we vary spherical aberration levels, pupil size, and pupil apodization (Stiles-Crawford effect) within the pupil function, from which point spread functions and optical transfer functions were computed. Through-focus analysis determined the refractive correction that optimized retinal image quality. Results For a large pupil (7 mm), as spherical aberration levels increase, refractions that optimize the visual Strehl ratio mirror refractions that maximize high spatial frequency modulation in the image, and both focus a near paraxial region of the pupil. These refractions are not affected by Stiles-Crawford effect apodization. Refractions that optimize low spatial frequency modulation come close to minimizing wavefront RMS, and vary with the level of spherical aberration and the Stiles-Crawford effect. In the presence of significant levels of spherical aberration (e.g., C(4,0) = 0.4 µm, 7 mm pupil), low spatial frequency refractions can induce a −0.7 D myopic shift compared to high spatial frequency refractions, and refractions that maximize image contrast of a 3 cycle per degree square-wave grating can cause a −0.75 D myopic drift relative to refractions that maximize image sharpness. Discussion Because of the small depth of focus associated with high spatial frequency stimuli, the large change in dioptric power across the pupil caused by spherical aberration limits the effective aperture contributing to the image of high spatial frequencies. Thus, when imaging high spatial frequencies, spherical aberration effectively induces an annular aperture defining that portion of the pupil contributing to a well-focused image. As spherical focus is manipulated during the refraction procedure, the dimensions of the annular aperture change. Image quality is maximized when the inner radius of the induced annulus falls to zero, thus defining a circular near paraxial region of the pupil that determines refraction outcome. PMID:23683093
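
    The through-focus analysis described above can be mimicked with a small Fourier-optics sketch: build a circular pupil carrying spherical aberration, add trial amounts of defocus, compute the point spread function, and keep the defocus that maximizes a Strehl-like peak metric. The aberration magnitudes, grid size, and the simple rho^2/rho^4 phase terms are illustrative assumptions, not the paper's exact pupil model.

      import numpy as np

      n = 256
      y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
      rho = np.hypot(x, y)
      pupil_mask = rho <= 1.0

      def psf_peak(defocus, spherical=0.4):
          # Wavefront error in waves: rho^4 spherical aberration plus rho^2 defocus.
          w = spherical * rho**4 + defocus * rho**2
          pupil = pupil_mask * np.exp(2j * np.pi * w)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
          return psf.max()                       # Strehl-like image-quality metric

      defocus_grid = np.linspace(-1.0, 0.5, 61)
      best = defocus_grid[np.argmax([psf_peak(d) for d in defocus_grid])]
      print(f'defocus maximizing the PSF peak: {best:.3f} waves')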

  14. Small Molecules-Big Data.

    PubMed

    Császár, Attila G; Furtenbacher, Tibor; Árendás, Péter

    2016-11-17

    Quantum mechanics builds large-scale graphs (networks): the vertices are the discrete energy levels the quantum system possesses, and the edges are the (quantum-mechanically allowed) transitions. Parts of the complete quantum mechanical networks can be probed experimentally via high-resolution, energy-resolved spectroscopic techniques. The complete rovibronic line list information for a given molecule can only be obtained through sophisticated quantum-chemical computations. Experiments as well as computations yield what we call spectroscopic networks (SN). First-principles SNs of even small, three- to five-atom molecules can be huge, qualifying for the big data description. Besides helping to interpret high-resolution spectra, the network-theoretical view offers several ideas for improving the accuracy and robustness of the increasingly important information systems containing line-by-line spectroscopic data. For example, the smallest number of measurements needed to obtain the complete list of energy levels is given by the minimum-weight spanning tree of the SN, and network clustering studies may call attention to the "weakest links" of a spectroscopic database. A present-day application of spectroscopic networks is within the MARVEL (Measured Active Rotational-Vibrational Energy Levels) approach, whereby the transition information of a measured SN is turned into experimental energy levels via a weighted linear least-squares refinement. MARVEL has been used successfully for 15 molecules; it has allowed most of the measured transitions to be validated and has yielded energy levels with well-defined and realistic uncertainties. Accurate knowledge of the energy levels, combined with computed transition intensities, allows the realistic prediction of spectra under many different circumstances, e.g., for widely different temperatures. Detailed knowledge of the energy level structure of a molecule coming from a MARVEL analysis is important for a considerable number of modeling efforts in chemistry, physics, and engineering.
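
    The spanning-tree observation above is easy to demonstrate on a toy spectroscopic network. In the sketch below the nodes, transitions, and uncertainty weights are invented for illustration; the minimum-weight spanning tree picks the smallest set of best-measured transitions that still connects every level.

      import networkx as nx

      # Toy spectroscopic network: nodes are energy levels, edges are measured
      # transitions weighted by measurement uncertainty (made-up values).
      sn = nx.Graph()
      sn.add_weighted_edges_from([
          ('E0', 'E1', 0.02), ('E0', 'E2', 0.05), ('E1', 'E2', 0.02),
          ('E1', 'E3', 0.05), ('E2', 'E3', 0.05), ('E2', 'E4', 0.10),
          ('E3', 'E4', 0.02),
      ])

      # The minimum-weight spanning tree is the smallest set of measurements
      # that still connects every level to the ground state.
      mst = nx.minimum_spanning_tree(sn)
      print(sorted(mst.edges(data='weight')))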

  15. Assessment of the Role of Different Imaging Modalities with Emphasis on Fdg Pet/Ct in the Management of Well Differentiated Thyroid Cancer (WDTC).

    PubMed

    Kendi A, Tuba Karagulle; Mudalegundi, Shwetha; Switchenko, Jeffrey; Lee, Daniel; Halkar, Raghuveer; Chen, Amy Y

    2016-01-01

    Positron emission tomography/computed tomography is suggested to have a role in the detection of iodine-negative recurrence in well differentiated thyroid cancer. The aim of this study is to identify the role of different imaging modalities in the management of well differentiated thyroid cancer. We reviewed 900 well differentiated thyroid cancer patients post-thyroidectomy who underwent recombinant human thyroid stimulating hormone stimulated Sodium Iodide I 131 imaging. Out of 900 patients, 74 had positron emission tomography/computed tomography. Multivariate analysis was performed controlling for positron emission tomography/computed tomography, Sodium Iodide I 131 scan, neck ultrasonography, age, sex, primary tumor size, stage, histology, and thyroglobulin. Patients were grouped according to results of the Sodium Iodide I 131 scan and positron emission tomography/computed tomography. Positron emission tomography/computed tomography was positive in 23 of 74 patients. The sensitivity for positron emission tomography was 11/11 (100%), the specificity was 51/63 (81.0%), the positive predictive value was 11/23 (47.8%), and the negative predictive value was 51/51 (100%). The sensitivity for neck ultrasonography was 4/8 (50%), the specificity was 53/60 (88.3%), the positive predictive value was 4/11 (36.4%), and the negative predictive value was 53/57 (93.0%). Fifty percent of patients who had a negative Sodium Iodide I 131 scan and positive positron emission tomography/computed tomography had a change in management. Thirty-six percent of those with positive neck ultrasonography had a change in management. Out of 11 recurrences, 6 had distant metastatic disease and 5 had regional nodal disease. Neck ultrasonography showed nodal metastasis in 4/5 (80%). Positron emission tomography/computed tomography altered management in the presence of a high thyroglobulin level and a negative Sodium Iodide I 131 scan. Neck ultrasonography should be the first line of imaging with rising thyroglobulin levels. Positron emission tomography/computed tomography should be considered for cases with high thyroglobulin levels and normal neck ultrasonography to look for distant metastatic disease.
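
    For reference, the test-performance figures quoted above follow from simple counts. The sketch below redoes that arithmetic using the PET/CT numbers given in the abstract (11 true positives, 12 false positives, 51 true negatives, 0 false negatives).

      # PET/CT counts taken from the abstract above.
      tp, fp, tn, fn = 11, 12, 51, 0

      sensitivity = tp / (tp + fn)    # 11/11
      specificity = tn / (tn + fp)    # 51/63
      ppv = tp / (tp + fp)            # 11/23
      npv = tn / (tn + fn)            # 51/51
      print(f'sensitivity={sensitivity:.1%}  specificity={specificity:.1%}  '
            f'PPV={ppv:.1%}  NPV={npv:.1%}')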

  16. Critical thinking traits of top-tier experts and implications for computer science education

    NASA Astrophysics Data System (ADS)

    Bushey, Dean E.

    A documented shortage of technical leadership and top-tier performers in computer science jeopardizes the technological edge, security, and economic well-being of the nation. The 2005 President's Information and Technology Advisory Committee (PITAC) Report on competitiveness in computational sciences highlights the major impact of science, technology, and innovation in keeping America competitive in the global marketplace. It stresses the fact that the supply of science, technology, and engineering experts is at the core of America's technological edge, national competitiveness and security. However, recent data shows that both undergraduate and postgraduate production of computer scientists is falling. The decline is "a quiet crisis building in the United States," a crisis that, if allowed to continue unchecked, could endanger America's well-being and preeminence among the world's nations. Past research on expert performance has shown that the cognitive traits of critical thinking, creativity, and problem solving possessed by top-tier performers can be identified, observed and measured. The studies show that the identified attributes are applicable across many domains and disciplines. Companies have begun to realize that cognitive skills are important for high-level performance and are reevaluating the traditional academic standards they have used to predict success for their top-tier performers in computer science. Previous research in the computer science field has focused either on programming skills of its experts or has attempted to predict the academic success of students at the undergraduate level. This study, on the other hand, examines the critical-thinking skills found among experts in the computer science field in order to explore the questions, "What cognitive skills do outstanding performers possess that make them successful?" and "How do currently used measures of academic performance correlate to critical-thinking skills among students?" The results of this study suggest a need to examine how critical-thinking abilities are learned in the undergraduate computer science curriculum and the need to foster these abilities in order to produce the high-level, critical-thinking professionals necessary to fill the growing need for these experts. Due to the fact that current measures of academic performance do not adequately depict students' cognitive abilities, assessment of these skills must be incorporated into existing curricula.

  17. High School Students' Social Media Usage Habits

    ERIC Educational Resources Information Center

    Tezci, Erdogan; Içen, Mustafa

    2017-01-01

    Social media which is an important product of Computer and Internet Technologies has a growing usage level day by day. Increasing social media usage level gives opportunity for new software developments and making investments in this area. From this aspect, therefore, social media has not only economic function but also make persons participate in…

  18. Tablet Computer Literacy Levels of the Physical Education and Sports Department Students

    ERIC Educational Resources Information Center

    Hergüner, Gülten

    2016-01-01

    Education systems are being affected in parallel by newly emerging hardware and new developments occurring in technology daily. Tablet usage especially is becoming ubiquitous in the teaching-learning processes in recent years. Therefore, using the tablets effectively, managing them and having a high level of tablet literacy play an important role…

  19. Universal computer control system (UCCS) for space telerobots

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with a direct interface to higher level control processors, uses pulsewidth multiplier power amplifiers, and one unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated with representative data.

  20. A SCILAB Program for Computing General-Relativistic Models of Rotating Neutron Stars by Implementing Hartle's Perturbation Method

    NASA Astrophysics Data System (ADS)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We implement Hartle's perturbation method for the computation of relativistic rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and it should prove an efficient tool for computing rapidly rotating pulsars.

  1. Measurement of fault latency in a digital avionic mini processor, part 2

    NASA Technical Reports Server (NTRS)

    Mcgough, J.; Swern, F.

    1983-01-01

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are described. Several earlier programs were reprogrammed, expanding the instruction set to capitalize on the full power of the BDX-930 computer. As a final demonstration of fault coverage, an extensive, 3-axis, high performance flight control computation was added. The stages in the development of a CPU self-test program emphasizing the relationship between fault coverage, speed, and quantity of instructions were demonstrated.

  2. A synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    This paper describes a computational architecture for an interconnected high speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.

  3. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone. The analysis of cortical bone requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphology operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in a more stable estimation of the parameters of cortical bone. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
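
    Once a cortical segmentation is available, the two parameters mentioned above can be estimated from the binary mask. The sketch below works on a synthetic 2D mask and uses a Euclidean distance transform as a simplified stand-in for the sphere fitting of the study; the geometry and pore positions are made up.

      import numpy as np
      from scipy import ndimage

      # Synthetic cortical compartment: a square "ring" of bone around a cavity.
      compartment = np.zeros((80, 80), dtype=bool)
      compartment[20:60, 20:60] = True
      compartment[30:50, 30:50] = False        # marrow cavity

      cortex = compartment.copy()
      cortex[25, 40] = cortex[40, 55] = False  # punch two intracortical pores

      # Cortical porosity: pore voxels as a fraction of the cortical compartment.
      porosity = (compartment & ~cortex).sum() / compartment.sum()

      # Thickness estimate: twice the largest distance from mid-cortex to the surface.
      edt = ndimage.distance_transform_edt(cortex)
      thickness = 2.0 * edt.max()

      print(f'porosity = {porosity:.3%}, approx. thickness = {thickness:.1f} voxels')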

  4. High Resolution Nature Runs and the Big Data Challenge

    NASA Technical Reports Server (NTRS)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM) while the reanalysis uses the GEOS-5 in Data Assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one of which is a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, and 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Retrospective analysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high speed InfiniBand network, high performance file systems and object storage, and virtual system environments specific to data intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.

  5. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulations ranging from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  6. [AERA. Dream machines and computing practices at the Mathematical Center].

    PubMed

    Alberts, Gerard; De Beer, Huub T

    2008-01-01

    Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the computing department have a working stored program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such superprograms. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.

  7. Uncover the Cloud for Geospatial Sciences and Applications to Adopt Cloud Computing

    NASA Astrophysics Data System (ADS)

    Yang, C.; Huang, Q.; Xia, J.; Liu, K.; Li, J.; Xu, C.; Sun, M.; Bambacus, M.; Xu, Y.; Fay, D.

    2012-12-01

    Cloud computing is emerging as the future infrastructure for providing computing resources to support and enable scientific research, engineering development, and application construction, as well as work force education. On the other hand, there is a lot of doubt about the readiness of cloud computing to support a variety of scientific research, development, and education activities. This research is a project funded by NASA SMD to investigate through holistic studies how ready cloud computing is to support the geosciences. Four applications with different computing characteristics, including data, computing, concurrency, and spatiotemporal intensity, are used to test the readiness of cloud computing to support geosciences. Three popular and representative cloud platforms, including Amazon EC2, Microsoft Azure, and NASA Nebula, as well as a traditional cluster are utilized in the study. Results illustrate that the cloud is ready to some degree, but more research needs to be done to fully realize the cloud benefits advertised by many vendors and defined by NIST. Specifically, 1) most cloud platforms can help stand up new computing instances, i.e., a new computer, in a few minutes as envisioned, and are therefore ready to support most computing needs in an on-demand fashion; 2) load balancing and elasticity, a defining characteristic, are ready in some cloud platforms, such as Amazon EC2, to support bigger jobs, e.g., those needing a response in minutes, while others do not yet support elasticity and load balancing well; all cloud platforms need further research and development to support real-time applications at the subminute level; 3) the user interface and functionality of cloud platforms vary a lot: some are very professional and well supported/documented, such as Amazon EC2, while some need significant improvement before the general public can adopt cloud computing without professional training or knowledge about computing infrastructure; 4) security is a big concern in cloud computing platforms; with the sharing spirit of cloud computing, it is very hard to ensure higher-level security unless a private cloud is built for a specific organization without public access, and public cloud platforms do not support the FISMA medium level yet and may never be able to support the FISMA high level; 5) HPC jobs are not well supported in cloud computing, and only Amazon EC2 supports them well. The research is being used by NASA and other agencies to consider cloud computing adoption. We hope the publication of the research will also help the public adopt cloud computing.

  8. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains underexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs, and MICs, to accelerate geocomputation in different applications.
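
    At its simplest, the data-level parallelism discussed above amounts to splitting a large grid into independent pieces and processing them on separate cores. The sketch below is a generic, hypothetical multiprocessing example (the group's actual work targets GPUs, MICs, and clusters rather than this toy tile-mean computation).

      import numpy as np
      from multiprocessing import Pool

      def tile_statistic(tile):
          # Stand-in per-tile computation, e.g. a local mean for a focal statistic.
          return float(tile.mean())

      def parallel_tile_means(raster, tile_size=256, workers=4):
          # Split the raster into tiles and map the per-tile computation over cores.
          tiles = [raster[i:i + tile_size, j:j + tile_size]
                   for i in range(0, raster.shape[0], tile_size)
                   for j in range(0, raster.shape[1], tile_size)]
          with Pool(workers) as pool:
              return pool.map(tile_statistic, tiles)

      if __name__ == '__main__':
          raster = np.random.rand(1024, 1024)      # synthetic "geodata" grid
          means = parallel_tile_means(raster)
          print(f'{len(means)} tiles processed, overall mean = {np.mean(means):.4f}')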

  9. Computer and internet use by persons after traumatic spinal cord injury.

    PubMed

    Goodman, Naomi; Jette, Alan M; Houlihan, Bethlyn; Williams, Steve

    2008-08-01

    To determine whether computer and internet use by persons post spinal cord injury (SCI) is sufficiently prevalent and broad-based to consider using this technology as a long-term treatment modality for patients who have sustained SCI. A multicenter cohort study. Twenty-six past and current U.S. regional Model Spinal Cord Injury Systems. Patients with traumatic SCI (N=2926) with follow-up interviews between 2004 and 2006, conducted at 1 or 5 years postinjury. Not applicable. Results revealed that 69.2% of participants with SCI used a computer; 94.2% of computer users accessed the internet. Among computer users, 19.1% used assistive devices for computer access. Of the internet users, 68.6% went online 5 to 7 days a week. The most frequent use for internet was e-mail (90.5%) and shopping sites (65.8%), followed by health sites (61.1%). We found no statistically significant difference in computer use by sex or level of neurologic injury, and no difference in internet use by level of neurologic injury. Computer and internet access differed significantly by age, with use decreasing as age group increased. The highest computer and internet access rates were seen among participants injured before the age of 18. Computer and internet use varied by race: 76% of white compared with 46% of black subjects were computer users (P<.001), and 95.3% of white respondents who used computers used the internet, compared with 87.6% of black respondents (P<.001). Internet use increased with education level (P<.001): eighty-six percent of participants who did not graduate from high school or receive a degree used the internet, while over 97% of those with a college or associate's degree did. While the internet holds considerable potential as a long-term treatment modality after SCI, limited access to the internet by those who are black, those injured after age 18, and those with less education does reduce its usefulness in the short term for these subgroups.

  10. State-of-the-art in Heterogeneous Computing

    DOE PAGES

    Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...

    2010-01-01

    Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.

  11. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation, which implements what we may call the "mother of all adaptive processes"; some variants on the basic algorithms will be sketched, along with lessons I have gleaned from three decades of working with EC. Next, neural networks: computational approaches that have long been studied as possible routes to "thinking machines", an old human dream, and that are based upon the only known existing example of intelligence. I will then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber-security world.
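
    As a concrete (if toy) illustration of the evolutionary-computation idea sketched in the talk, the following minimal genetic algorithm evolves bit strings toward a simple "OneMax" fitness function. The population size, mutation rate, and fitness function are arbitrary demonstration choices, not anything from the presentation.

```python
# Toy genetic algorithm ("OneMax"): evolutionary computation in miniature.
# All parameters are arbitrary demonstration values.
import random


def fitness(bits):
    return sum(bits)            # objective: maximize the number of 1-bits


def evolve(n_bits=30, pop_size=40, generations=50, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select():               # binary tournament selection
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g     # bit-flip mutation
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)


best = evolve()
print("best fitness:", fitness(best))
```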

  12. Assessing the Integration of Computational Modeling and ASU Modeling Instruction in the High School Physics Classroom

    NASA Astrophysics Data System (ADS)

    Aiken, John; Schatz, Michael; Burk, John; Caballero, Marcos; Thoms, Brian

    2012-03-01

    We describe the assessment of computational modeling in a ninth-grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects in a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program so that it represents the correct motion. Using questions isomorphic to the Force Concept Inventory, we gauge students' understanding of force in relation to the simulation. The students are also given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. Finally, we investigate the attitudes and prior experiences of each student using the Computational Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech, as well as a prior computational experiences survey.
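
    The kind of model students are asked to build can be illustrated in a few lines. The sketch below integrates motion under a constant net force with a simple Euler-Cromer update, written in plain Python rather than VPython so it runs without a graphics environment; the mass, force, and time step are illustrative values only.

```python
# Constant-net-force motion model of the kind students build in VPython,
# written here in plain Python; all values are illustrative.
mass = 2.0      # kg
force = 6.0     # N, constant net force
dt = 0.01       # s, time step

t, x, v = 0.0, 0.0, 0.0
while t < 3.0:
    a = force / mass        # Newton's second law
    v = v + a * dt          # update velocity
    x = x + v * dt          # update position (Euler-Cromer step)
    t = t + dt

print(f"after {t:.2f} s: x = {x:.2f} m, v = {v:.2f} m/s")
# Analytic result: v = (F/m)*t = 9 m/s and x = 0.5*(F/m)*t**2 = 13.5 m;
# the discrete update approaches these values as dt shrinks.
```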

  13. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
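
    The performance gap discussed here hinges on whether inner loops can be vectorized. The snippet below gives a loose, software-level illustration of that idea by timing the same stencil-style update as an explicit element-by-element loop and as a vectorized array operation in NumPy; it mimics the concept only and measures nothing about the SX6 or Power3/4 hardware.

```python
# Loose illustration of why vectorizable inner loops matter: the same
# stencil-style update as an explicit scalar loop and as a vectorized
# array operation. This mimics the concept only; it says nothing about
# the SX6 or Power3/4 hardware discussed in the abstract.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.empty(n)

t0 = time.perf_counter()
for i in range(1, n - 1):                  # scalar, element-at-a-time
    b[i] = 0.5 * (a[i - 1] + a[i + 1])
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
b_vec = 0.5 * (a[:-2] + a[2:])             # vectorized over the whole array
t_vector = time.perf_counter() - t0

print(f"scalar loop: {t_scalar:.3f} s, vectorized: {t_vector:.4f} s")
print("results agree:", bool(np.allclose(b[1:-1], b_vec)))
```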

  14. An Excel sheet for inferring children's number-knower levels from give-N data.

    PubMed

    Negen, James; Sarnecka, Barbara W; Lee, Michael D

    2012-03-01

    Number-knower levels are a series of stages of number concept development in early childhood. A child's number-knower level is typically assessed using the give-N task. Although the task procedure has been highly refined, the standard ways of analyzing give-N data remain somewhat crude. Lee and Sarnecka (Cogn Sci 34:51-67, 2010, in press) have developed a Bayesian model of children's performance on the give-N task that allows knower level to be inferred in a more principled way. However, this model requires considerable expertise and computational effort to implement and apply to data. Here, we present an approximation to the model's inference that can be computed with Microsoft Excel. We demonstrate the accuracy of the approximation and provide instructions for its use. This makes the powerful inferential capabilities of the Bayesian model accessible to developmental researchers interested in estimating knower levels from give-N data.

  15. Writing and compiling code into biochemistry.

    PubMed

    Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab

    2010-01-01

    This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.

  16. Wavelet energy-guided level set-based active contour: a segmentation method to segment highly similar regions.

    PubMed

    Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi

    2010-07-01

    This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have similar intensity distributions with surrounding/adjacent regions. In this work, we adapt a feature computed from wavelet transform called wavelet energy to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions demonstrating the characteristics of intra-region intensity variations and having high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts and to provide a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.
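
    The wavelet-energy feature at the heart of this approach can be sketched in a few lines. Assuming the PyWavelets (pywt) package, the function below computes per-subband energies (sums of squared wavelet coefficients) for an image region; the level-set embedding that defines WELSAC itself is not reproduced, and the wavelet choice and toy patches are illustrative assumptions.

```python
# Wavelet-energy feature for an image region (feature computation only;
# the level-set model of WELSAC is not reproduced here).
# Requires NumPy and PyWavelets (pywt).
import numpy as np
import pywt


def wavelet_energy(region, wavelet="db2", level=2):
    """Per-subband energies: sum of squared wavelet coefficients."""
    coeffs = pywt.wavedec2(region, wavelet=wavelet, level=level)
    energies = [float(np.sum(coeffs[0] ** 2))]            # approximation band
    for cH, cV, cD in coeffs[1:]:                         # detail bands per level
        energies.append(float(np.sum(cH**2 + cV**2 + cD**2)))
    return np.asarray(energies)


# Toy usage: two patches with similar means but different texture energy.
rng = np.random.default_rng(1)
smooth = rng.normal(100.0, 1.0, (64, 64))
textured = smooth + 10.0 * np.sin(np.arange(64))          # add horizontal texture
print(wavelet_energy(smooth))
print(wavelet_energy(textured))
```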

  17. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology, and its realization as a prototype CAD (Computer Aided Design) system, for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections, lower-level components, and so on. A model for the representation and analysis of multiprocessor systems at several levels of abstraction, and an implementation of a CAD system based on this model, are described. Also described are a high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools, such as a high-level simulator and a graphics design interface, that are integrated into the prototype system. Procedures are described for the synthesis of semiregular processor arrays and for computing the switching of input/output signals, the memory management and control of the processor array, and the sequencing and segmentation of input/output data streams arising from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed and each component is mapped to a module or module generator in a symbolic layout library, compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.

  18. Toward High-Level Theoretical Studies of Large Biodiesel Molecules: An ONIOM [QCISD(T)/CBS:DFT] Study of the Reactions between Unsaturated Methyl Esters (CnH2n-1COOCH3) and Hydrogen Radical.

    PubMed

    Zhang, Lidong; Meng, Qinghui; Chi, Yicheng; Zhang, Peng

    2018-05-31

    A two-layer ONIOM[QCISD(T)/CBS:DFT] method was proposed for the high-level single-point energy calculations of large biodiesel molecules and was validated for the hydrogen abstraction reactions of unsaturated methyl esters that are important components of real biodiesel. The reactions under investigation include all the reactions on the potential energy surface of CnH2n-1COOCH3 (n = 2-5, 17) + H, including the hydrogen abstraction, the hydrogen addition, the isomerization (intramolecular hydrogen shift), and the β-scission reactions. By virtue of the introduced concept of chemically active center, a unified specification of the chemically active portion for the ONIOM (ONIOM = our own n-layered integrated molecular orbital and molecular mechanics) method was proposed to account for the additional influence of the C═C double bond. The predicted energy barriers and heats of reaction by using the ONIOM method are in very good agreement with those obtained by using the widely accepted high-level QCISD(T)/CBS theory, as verified by the computational deviations being less than 0.15 kcal/mol, for almost all the reaction pathways under investigation. The method provides a computationally accurate and affordable approach to combustion chemists for high-level theoretical chemical kinetics of large biodiesel molecules.

  19. Computer modelling as a tool for the exposure assessment of operators using faulty agricultural pesticide spraying equipment.

    PubMed

    Bańkowski, Robert; Wiadrowska, Bozena; Beresińska, Martyna; Ludwicki, Jan K; Noworyta-Głowacka, Justyna; Godyń, Artur; Doruchowski, Grzegorz; Hołownicki, Ryszard

    2013-01-01

    Faulty but still operating agricultural pesticide sprayers may pose an unacceptable health risk to operators. The computerized models designed to calculate exposure and risk for pesticide sprayers, used as an aid in the evaluation and authorisation of plant protection products, may also be applied to assess the health risk to operators when faulty sprayers are used. The aim was to evaluate, by means of computer modelling, the impact of different exposure scenarios on the health risk for operators using faulty agricultural spraying equipment. The exposure modelling was performed for 15 pesticides (5 insecticides, 7 fungicides and 3 herbicides). The critical parameter, i.e. the toxicological end-point on which the risk assessment was based, was the no observable adverse effect level (NOAEL). This enabled risk to be estimated under various exposure conditions, such as pesticide concentration in the plant protection product, type of sprayed crop, and number of treatments. Computer modelling was based on the UK POEM model, including determination of the acceptable operator exposure level (AOEL). The degree of operator exposure during pesticide treatment could thus be defined whether or not personal protection equipment had been employed. Data used for computer modelling were obtained from simulated treatments with pesticide substitutes using variously damaged knapsack sprayers. These substitute preparations contained markers that allowed computer simulations to be made, analogous to real-life exposure situations, in a dose-dependent fashion. Exposures were estimated according to operator dosimetry under 'field' conditions for low, medium and high target field crops. The exposure modelling for high target field crops demonstrated exceedance of the AOEL in all simulated treatment cases (100%) using damaged sprayers, irrespective of the type of damage and of whether individual protective measures had been adopted. For low and medium target field crops, exceedances occurred in 40-80% of cases. Computer modelling may be considered a practical tool for hazard assessment when faulty agricultural sprayers are used. It may also be applied for programming the quality checks and maintenance systems of this equipment.
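
    The underlying comparison the model performs, estimated systemic operator exposure against the AOEL, can be illustrated with a toy calculation. Every number and parameter name below is a hypothetical placeholder, not a UK POEM coefficient or a value from this study.

```python
# Toy sketch of the exposure-vs-AOEL comparison underlying this kind of
# modelling. Every number below is a hypothetical placeholder, not a
# UK POEM coefficient or a value from the study.
def operator_exposure(dermal_mg, inhalation_mg, dermal_absorption,
                      inhalation_absorption, body_weight_kg):
    """Systemic exposure in mg of active substance per kg body weight per day."""
    absorbed = (dermal_mg * dermal_absorption
                + inhalation_mg * inhalation_absorption)
    return absorbed / body_weight_kg


aoel = 0.02                                    # mg/kg bw/day (placeholder)
exposure = operator_exposure(dermal_mg=5.0,    # contamination on skin (placeholder)
                             inhalation_mg=0.2,
                             dermal_absorption=0.10,
                             inhalation_absorption=1.0,
                             body_weight_kg=60.0)

print(f"estimated exposure: {exposure:.4f} mg/kg bw/day")
print("exceeds AOEL" if exposure > aoel else "within AOEL")
```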

  20. High-speed photon-counting x-ray computed tomography system utilizing a multipixel photon counter

    NASA Astrophysics Data System (ADS)

    Sato, Eiichi; Enomoto, Toshiyuki; Watanabe, Manabu; Hitomi, Keitaro; Takahashi, Kiyomi; Sato, Shigehiro; Ogawa, Akiro; Onagawa, Jun

    2009-07-01

    High-speed photon counting is useful for discriminating photon energy and for decreasing the absorbed dose to patients in medical radiography, and such counting can be used to construct an x-ray computed tomography (CT) system. The photon-counting x-ray CT system is of the first-generation type and consists of an x-ray generator, a turntable, a translation stage, a two-stage controller, a multipixel photon counter (MPPC) module, a 1.0-mm-thick LSO crystal (scintillator), a counter card (CC), and a personal computer (PC). Tomography is accomplished by repeating the linear scanning and rotation of an object, and projection curves of the object are obtained during the linear scan using the detector consisting of the MPPC module and the LSO. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. The lower level of the photon energy is roughly determined by a comparator circuit in the module, and the unit of this level is the photon equivalent (pe). Thus, the average photon energy of the x-ray spectra increases with increasing lower-level voltage of the comparator. The maximum count rate was approximately 20 Mcps, and energy-discriminated CT was roughly carried out.
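
    The first-generation (translate-rotate) acquisition geometry described here can be sketched numerically: for each rotation angle, the linear scan is modelled as a set of line integrals through the object, yielding one projection curve per angle. The code below is a conceptual NumPy/SciPy simulation on a synthetic phantom and has no connection to the authors' MPPC hardware or counting electronics.

```python
# Conceptual simulation of first-generation (translate-rotate) acquisition:
# for each rotation angle, the linear scan is modelled as line integrals
# across a synthetic phantom. No relation to the authors' MPPC hardware.
import numpy as np
from scipy.ndimage import rotate


def make_phantom(n=128):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    phantom = ((x**2 + y**2) < 0.8**2).astype(float)      # outer disk
    phantom += 2.0 * (((x - 0.3)**2 + y**2) < 0.15**2)    # dense insert
    return phantom


def acquire_sinogram(phantom, n_angles=90):
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    # Each row is one projection curve obtained from a "linear scan".
    return np.array([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])


sinogram = acquire_sinogram(make_phantom())
print("sinogram shape (angles x detector positions):", sinogram.shape)
```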

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  2. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

    We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak-scaling parallel approach at the MPI level only, while a strong-scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherently parallel nature of the ORAC implementation of the FS-DAM algorithm make the code a potentially effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac .
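
    The two-level parallel layout described above, independent trajectories distributed across MPI ranks with shared-memory parallelism inside each rank, can be caricatured in a short script. The sketch below uses mpi4py for the replica level and a vectorized NumPy energy evaluation as a stand-in for the intranode OpenMP force decomposition; it is not ORAC code, and the toy system and step counts are invented.

```python
# Caricature of the two-level parallelism: one independent replica per MPI
# rank (weak scaling), with the per-step energy/force work vectorized inside
# each rank as a stand-in for the OpenMP layer. Not ORAC code; run with e.g.
#   mpirun -n 4 python replica_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)          # each replica gets its own seed
positions = rng.random((256, 3))           # toy "system" for this replica


def pair_energy(pos):
    # Vectorized pairwise energy, standing in for the intranode force loop.
    diff = pos[:, None, :] - pos[None, :, :]
    r2 = (diff ** 2).sum(-1) + np.eye(len(pos))   # keep the diagonal finite
    return float(np.triu(1.0 / r2, k=1).sum())


energies = []
for step in range(100):                    # toy "trajectory" for this replica
    positions += 0.001 * rng.standard_normal(positions.shape)
    energies.append(pair_energy(positions))

all_series = comm.gather(energies, root=0)  # collect every replica at rank 0
if rank == 0:
    print(f"collected {len(all_series)} replica energy series of length 100")
```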

  3. High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.

    PubMed

    Schwiedrzik, Caspar M; Freiwald, Winrich A

    2017-09-27

    Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. The effect of high-pressure devitrification and densification on ballistic-penetration resistance of fused silica

    NASA Astrophysics Data System (ADS)

    Avuthu, Vasudeva Reddy

    Despite the clear benefits offered by more advanced transparent materials, (e.g. transparent ceramics offer a very attractive combination of high stiffness and high hardness levels, highly-ductile transparent polymers provide superior fragment-containing capabilities, etc.), ballistic ceramic-glass like fused-silica remains an important constituent material in a majority of transparent impact-resistant structures (e.g. windshields and windows of military vehicles, portholes in ships, ground vehicles and spacecraft) used today. Among the main reasons for the wide-scale use of glass, the following three are most frequently cited: (i) glass-structure fabrication technologies enable the production of curved, large surface-area, transparent structures with thickness approaching several inches; (ii) relatively low material and manufacturing costs; and (iii) compositional modifications, chemical strengthening, and controlled crystallization have been demonstrated to be capable of significantly improving the ballistic properties of glass. In the present work, the potential of high-pressure devitrification and densification of fused-silica as a ballistic-resistance-enhancement mechanism is investigated computationally. In the first part of the present work, all-atom molecular-level computations are carried out to infer the dynamic response and material microstructure/topology changes of fused silica subjected to ballistic impact by a nanometer-sized hard projectile. The analysis was focused on the investigation of specific aspects of the dynamic response and of the microstructural changes such as the deformation of highly sheared and densified regions, and the conversion of amorphous fused silica to SiO2 crystalline allotropic modifications (in particular, alpha-quartz and stishovite). The microstructural changes in question were determined by carrying out a post-processing atom-coordination procedure. This procedure suggested the formation of high-density stishovite (and perhaps alpha-quartz) within fused silica during ballistic impact. To rationalize the findings obtained, the all-atom molecular-level computational analysis is complemented by a series of quantum-mechanics density functional theory (DFT) computations. The latter computations enable determination of the relative potential energies of the fused silica, alpha-quartz and stishovite under ambient pressure (i.e. under their natural densities) as well as under imposed (as high as 50 GPa) pressures (i.e. under higher densities) and shear strains. In addition, the transition states associated with various fused-silica devitrification processes were identified. In the second part of the present work, the molecular-level computational results obtained in the first portion of the work are used to enrich a continuum-type constitutive model (that is, the so-called Johnson-Holmquist-2, JH2, model) for fused silica. Since the aforementioned devitrification and permanent-densification processes modify the response of fused silica to the pressure as well as to the deviatoric part of the stress, changes had to be made in both the JH2 equation of state and the strength model. 
To assess the potential improvements with respect to the ballistic-penetration resistance of this material brought about by the fused-silica devitrification and permanent-densification processes, a series of transient non-linear dynamics finite element analyses of the transverse impact of a fused-silica test plate with a solid right-circular cylindrical steel projectile was conducted. The results obtained revealed that, provided the projectile incident velocity and, hence, the attendant pressure, is sufficiently high, fused silica can undergo impact-induced energy-consuming devitrification, which improves its ballistic-penetration resistance.

  5. Acceptability of a Virtual Patient Educator for Hispanic Women.

    PubMed

    Wells, Kristen J; Vàzquez-Otero, Coralia; Bredice, Marissa; Meade, Cathy D; Chaet, Alexis; Rivera, Maria I; Arroyo, Gloria; Proctor, Sara K; Barnes, Laura E

    2015-01-01

    There are few Spanish-language interactive, technology-driven health education programs. The objectives of this feasibility study were to (a) learn more about computer and technology usage among Hispanic women living in a rural community and (b) evaluate the acceptability of the concept of using an embodied conversational agent (ECA) computer application among this population. A survey about computer usage history and interest in computers was administered to a convenience sample of 26 women. A sample video prototype of a hospital-discharge ECA was then shown, followed by questions to gauge opinion about the ECA. The data indicate that the women exhibited both a high level of computer experience and enthusiasm for the ECA. Feedback from the community is essential to ensure equity in state-of-the-art dissemination of health information.

  6. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
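
    A minimal sketch of the non-intrusive polynomial chaos idea is given below: a least-squares regression of model evaluations onto a Hermite polynomial basis for a single standard-normal input. The paper's sparse multivariate expansions, the two-level meta-model, and the p-box propagation are not reproduced; the model function, degree, and design size are arbitrary assumptions.

```python
# Non-intrusive polynomial chaos surrogate in one dimension: least-squares
# regression on a Hermite basis for a standard-normal input. The paper's
# sparse multivariate expansions and p-box machinery are not reproduced.
import numpy as np
from numpy.polynomial import hermite_e as He


def expensive_model(x):
    # Stand-in for a costly computational model.
    return np.sin(x) + 0.1 * x**2


degree = 8
x_train = np.random.default_rng(0).standard_normal(40)   # experimental design
y_train = expensive_model(x_train)

# Vandermonde matrix of probabilists' Hermite polynomials He_0 ... He_degree.
A = He.hermevander(x_train, degree)
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)


def surrogate(x):
    return He.hermevander(np.atleast_1d(x), degree) @ coeffs


x_test = np.linspace(-2.5, 2.5, 5)
print("model     :", np.round(expensive_model(x_test), 3))
print("surrogate :", np.round(surrogate(x_test), 3))
```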

  7. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  8. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
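
    A toy version of the fuzzy-rule evaluation described here is sketched below: triangular membership functions, max-min inference, and centroid defuzzification for two invented rules relating an obstacle distance to a commanded speed. The linguistic variables and rule set are purely illustrative, not those used in the project.

```python
# Toy fuzzy inference: triangular memberships, max-min inference, and
# centroid defuzzification for two invented rules mapping an obstacle
# distance to a commanded speed. Purely illustrative.
import numpy as np


def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)


def infer_speed(distance):
    # Rule 1: IF obstacle distance is NEAR THEN speed is SLOW
    # Rule 2: IF obstacle distance is FAR  THEN speed is FAST
    w_near = tri(distance, -1.0, 0.0, 2.0)
    w_far = tri(distance, 1.0, 3.0, 5.0)

    speed = np.linspace(0.0, 1.0, 201)          # output universe of discourse
    slow = tri(speed, -0.5, 0.0, 0.5)
    fast = tri(speed, 0.5, 1.0, 1.5)

    # Clip each output set by its rule strength, aggregate by max.
    aggregated = np.maximum(np.minimum(w_near, slow), np.minimum(w_far, fast))
    return float((speed * aggregated).sum() / aggregated.sum())   # centroid


for d in (0.2, 1.5, 2.8):
    print(f"obstacle distance {d:.1f} m -> commanded speed {infer_speed(d):.2f}")
```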

  9. Positrons vs electrons channeling in silicon crystal: energy levels, wave functions and quantum chaos manifestations

    NASA Astrophysics Data System (ADS)

    Shul'ga, N. F.; Syshchenko, V. V.; Tarnovsky, A. I.; Solovyev, I. I.; Isupov, A. Yu.

    2018-01-01

    The motion of fast electrons through a crystal during axial channeling can be either regular or chaotic. Dynamical chaos in quantum systems manifests itself both in the statistical properties of energy spectra and in the morphology of the wave functions of individual stationary states. In this report, we investigate the axial channeling of high- and low-energy electrons and positrons near the [100] direction of a silicon crystal. This case is particularly interesting because the chaotic motion domain occupies only a small part of the phase space for channeling electrons, whereas the motion of channeling positrons is substantially chaotic for almost all initial conditions. The energy levels of transverse motion, as well as the wave functions of the stationary states, have been computed numerically. Group theory methods were used to classify the computed eigenfunctions and to identify the non-degenerate and doubly degenerate energy levels. The channeling radiation spectrum for the low-energy electrons has also been computed.

  10. Automating software design system DESTA

    NASA Technical Reports Server (NTRS)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  11. Ab Initio Simulations and Electronic Structure of Lithium-Doped Ionic Liquids: Structure, Transport, and Electrochemical Stability.

    PubMed

    Haskins, Justin B; Bauschlicher, Charles W; Lawson, John W

    2015-11-19

    Density functional theory (DFT), density functional theory molecular dynamics (DFT-MD), and classical molecular dynamics using polarizable force fields (PFF-MD) are employed to evaluate the influence of Li(+) on the structure, transport, and electrochemical stability of three potential ionic liquid electrolytes: N-methyl-N-butylpyrrolidinium bis(trifluoromethanesulfonyl)imide ([pyr14][TFSI]), N-methyl-N-propylpyrrolidinium bis(fluorosulfonyl)imide ([pyr13][FSI]), and 1-ethyl-3-methylimidazolium boron tetrafluoride ([EMIM][BF4]). We characterize the Li(+) solvation shell through DFT computations of [Li(Anion)n]((n-1)-) clusters, DFT-MD simulations of isolated Li(+) in small ionic liquid systems, and PFF-MD simulations with high Li-doping levels in large ionic liquid systems. At low levels of Li-salt doping, highly stable solvation shells having two to three anions are seen in both [pyr14][TFSI] and [pyr13][FSI], whereas solvation shells with four anions dominate in [EMIM][BF4]. At higher levels of doping, we find the formation of complex Li-network structures that increase the frequency of four anion-coordinated solvation shells. A comparison of computational and experimental Raman spectra for a wide range of [Li(Anion)n]((n-1)-) clusters shows that our proposed structures are consistent with experiment. We then compute the ion diffusion coefficients and find measures from small-cell DFT-MD simulations to be the correct order of magnitude, but influenced by small system size and short simulation length. Correcting for these errors with complementary PFF-MD simulations, we find DFT-MD measures to be in close agreement with experiment. Finally, we compute electrochemical windows from DFT computations on isolated ions, interacting cation/anion pairs, and liquid-phase systems with Li-doping. For the molecular-level computations, we generally find the difference between ionization energy and electron affinity from isolated ions and interacting cation/anion pairs to provide upper and lower bounds, respectively, to experiment. In the liquid phase, we find the difference between the lowest unoccupied and highest occupied electronic levels in pure and hybrid functionals to provide lower and upper bounds, respectively, to experiment. Li-doping in the liquid-phase systems results in electrochemical windows little changed from the neat systems.

  12. Gender stereotypes, aggression, and computer games: an online survey of women.

    PubMed

    Norris, Kamala O

    2004-12-01

    Computer games were conceptualized as a potential mode of entry into computer-related employment for women. Computer games contain increasing levels of realism and violence, as well as biased gender portrayals. It has been suggested that aggressive personality characteristics attract people to aggressive video games, and that more women do not play computer games because they are socialized to be non-aggressive. To explore gender identity and aggressive personality in the context of computers, an online survey was conducted on women who played computer games and women who used the computer but did not play computer games. Women who played computer games perceived their online environments as less friendly but experienced less sexual harassment online, were more aggressive themselves, and did not differ in gender identity, degree of sex role stereotyping, or acceptance of sexual violence when compared to women who used the computer but did not play video games. Finally, computer gaming was associated with decreased participation in computer-related employment; however, women with high masculine gender identities were more likely to use computers at work.

  13. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    DTIC Science & Technology

    2009-04-01

    threatened by global competition. It is essential that these suppliers remain competitive and maintain their technological advantage. In this increasingly...place themselves, as well as customers who rely on them, in competitive jeopardy. Despite the potential competitive advantage associated with adopting...computing users into the HPC fold and to enable more entry-level users to exploit HPC more fully for competitive advantage. About half of the surveyed

  14. The Impact of Animation in CD-ROM Books on Students' Reading Behaviors and Comprehension.

    ERIC Educational Resources Information Center

    Okolo, Cindy; Hayes, Renee

    This study evaluated the use of children's literature presented via one of three conditions: an adult reading a book to the child; the child reading a CD-ROM version of a book on the computer but without animation; and the child reading the book on the computer with high levels of animation. The study, in one primary grade classroom, involved 10…

  15. Exploring the Nature of the H[subscript 2] Bond. 2. Using Ab Initio Molecular Orbital Calculations to Obtain the Molecular Constants

    ERIC Educational Resources Information Center

    Halpern, Arthur M.; Glendening, Eric D.

    2013-01-01

    A project for students in an upper-level course in quantum or computational chemistry is described in which they are introduced to the concepts and applications of a high quality, ab initio treatment of the ground-state potential energy curve (PEC) for H[subscript 2] and D[subscript 2]. Using a commercial computational chemistry application and a…

  16. The computational structural mechanics testbed procedures manual

    NASA Technical Reports Server (NTRS)

    Stewart, Caroline B. (Compiler)

    1991-01-01

    The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis as described in the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

  17. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    PubMed Central

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    Background We aimed to clarify that the size of the lung adenocarcinoma evaluated using mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0.81 and 0.65 for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Conclusions According to the univariate analyses including a logistic regression and ROCs performed for variables with p-values of <0.05 on univariate analyses, our results suggest that measuring tumour size using mediastinal window on high-resolution computed tomography is a simple and useful preoperative prognosis modality in small adenocarcinoma. PMID:25365326

  18. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified), are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
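
    The comparison pipeline, high-level metrics per time series followed by principal components analysis to form a behavior space, can be sketched compactly. In the code below the metrics are simple illustrative choices and the two synthetic "models" are toy signal generators; neither corresponds to the authors' feature set or simulations.

```python
# Sketch of the comparison pipeline: a few high-level metrics per time
# series, then PCA over the pooled metrics to form a "behavior space".
# The metrics and the two toy signal generators are illustrative only.
import numpy as np


def metrics(series):
    """High-level summary features of one output time series."""
    return np.array([series.mean(),
                     series.std(),
                     float(np.abs(np.fft.rfft(series))[1:20].argmax()),  # dominant bin
                     float(np.mean(np.abs(np.diff(series))))])           # roughness


def behavior_space(features, n_components=2):
    X = features - features.mean(axis=0)
    X = X / (X.std(axis=0) + 1e-12)                 # z-score each metric
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T                  # PCA scores


rng = np.random.default_rng(0)
model_a = [np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
           for _ in range(30)]
model_b = [np.sin(np.linspace(0, 22, 500)) + 0.2 * rng.standard_normal(500)
           for _ in range(30)]

features = np.array([metrics(s) for s in model_a + model_b])
scores = behavior_space(features)
print("behavior-space coordinates of the first three runs of each model:")
print(np.round(scores[:3], 2))
print(np.round(scores[30:33], 2))
```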

  19. Role of cognitive assessment for high school graduates prior to choosing their college major.

    PubMed

    AlAbdulwahab, Sami S; Kachanathu, Shaji John; AlSaeed, Abdullah Saad

    2018-02-01

    [Purpose] The academic performance of college students can be affected by students' ability and by the efficacy of teaching methods. It is important to assess the progression of college students' cognitive abilities across different college majors and as they move from junior to senior levels. However, a dearth of studies has examined the role of cognitive ability tests as a tool to determine the aptitude of prospective students. Therefore, this study assessed the cognitive abilities of computer science and art students. [Subjects and Methods] Participants were 130 college students (70 computer science and 60 art students) in their first and final years of study at King Saud University. Cognitive ability was assessed using the Test of Nonverbal Intelligence, Third Edition. [Results] The cognitive ability of computer science students was statistically better than that of art students and improved from junior to senior levels, whereas that of art students did not. [Conclusion] The cognitive ability of computer science college students was superior to that of art students, indicating the importance of cognitive ability assessment for high school graduates prior to choosing a college major. Cognitive scales should be included as an aptitude assessment tool for decision-makers and prospective students to determine an appropriate career, which might reduce the rate of university dropout.

  20. Use of handheld computers in clinical practice: a systematic review.

    PubMed

    Mickan, Sharon; Atherton, Helen; Roberts, Nia Wyn; Heneghan, Carl; Tilson, Julie K

    2014-07-06

    Many healthcare professionals use smartphones and tablets to inform patient care. Contemporary research suggests that handheld computers may support aspects of clinical diagnosis and management. This systematic review was designed to synthesise high quality evidence to answer the question: Does healthcare professionals' use of handheld computers improve their access to information and support clinical decision making at the point of care? A detailed search was conducted using Cochrane, MEDLINE, EMBASE, PsycINFO, Science and Social Science Citation Indices since 2001. Interventions promoting healthcare professionals seeking information or making clinical decisions using handheld computers were included. Classroom learning and the use of laptop computers were excluded. Two authors independently selected studies, assessed quality using the Cochrane Risk of Bias tool and extracted data. High levels of data heterogeneity negated statistical synthesis. Instead, evidence for effectiveness was summarised narratively, according to each study's aim for assessing the impact of handheld computer use. We included seven randomised trials investigating medical or nursing staffs' use of Personal Digital Assistants. Effectiveness was demonstrated across three distinct functions that emerged from the data: accessing information for clinical knowledge, adherence to guidelines and diagnostic decision making. When healthcare professionals used handheld computers to access clinical information, their knowledge improved significantly more than that of peers who used paper resources. When clinical guideline recommendations were presented on handheld computers, clinicians made significantly safer prescribing decisions and adhered more closely to recommendations than peers using paper resources. Finally, healthcare professionals made significantly more appropriate diagnostic decisions using clinical decision making tools on handheld computers compared to colleagues who did not have access to these tools. For these clinical decisions, the numbers needed to test/screen were all less than 11. Healthcare professionals' use of handheld computers may improve their information seeking, adherence to guidelines and clinical decision making. Handheld computers can provide real time access to and analysis of clinical information. The integration of clinical decision support systems within handheld computers offers clinicians the highest level of synthesised evidence at the point of care. Future research is needed to replicate these early results and to identify beneficial clinical outcomes.

  1. Use of handheld computers in clinical practice: a systematic review

    PubMed Central

    2014-01-01

    Background Many healthcare professionals use smartphones and tablets to inform patient care. Contemporary research suggests that handheld computers may support aspects of clinical diagnosis and management. This systematic review was designed to synthesise high quality evidence to answer the question; Does healthcare professionals’ use of handheld computers improve their access to information and support clinical decision making at the point of care? Methods A detailed search was conducted using Cochrane, MEDLINE, EMBASE, PsycINFO, Science and Social Science Citation Indices since 2001. Interventions promoting healthcare professionals seeking information or making clinical decisions using handheld computers were included. Classroom learning and the use of laptop computers were excluded. Two authors independently selected studies, assessed quality using the Cochrane Risk of Bias tool and extracted data. High levels of data heterogeneity negated statistical synthesis. Instead, evidence for effectiveness was summarised narratively, according to each study’s aim for assessing the impact of handheld computer use. Results We included seven randomised trials investigating medical or nursing staffs’ use of Personal Digital Assistants. Effectiveness was demonstrated across three distinct functions that emerged from the data: accessing information for clinical knowledge, adherence to guidelines and diagnostic decision making. When healthcare professionals used handheld computers to access clinical information, their knowledge improved significantly more than peers who used paper resources. When clinical guideline recommendations were presented on handheld computers, clinicians made significantly safer prescribing decisions and adhered more closely to recommendations than peers using paper resources. Finally, healthcare professionals made significantly more appropriate diagnostic decisions using clinical decision making tools on handheld computers compared to colleagues who did not have access to these tools. For these clinical decisions, the numbers need to test/screen were all less than 11. Conclusion Healthcare professionals’ use of handheld computers may improve their information seeking, adherence to guidelines and clinical decision making. Handheld computers can provide real time access to and analysis of clinical information. The integration of clinical decision support systems within handheld computers offers clinicians the highest level of synthesised evidence at the point of care. Future research is needed to replicate these early results and to identify beneficial clinical outcomes. PMID:24998515

  2. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals

    PubMed Central

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F.

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of time-frequency representation of high-frequency brain signals could be larger than 1 terabytes (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed to channel-cross-channel matrix (CxC) representing the spatiotemporal patterns of every possible sensor-pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities that traditionally took 2–3 days and used 1–2 TB storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory. PMID:24904402
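
    The core definition used here, accumulated source imaging as a volumetric summation of source activity over a time window, can be illustrated on a toy source-level array. The sketch below stands in for beamformer output with random data plus one artificially "spiking" voxel; the authors' C/C++ pipeline and the channel-cross-channel decomposition are not reproduced.

```python
# Accumulated source imaging in miniature: a volumetric summation of
# source-level activity over a time window. The toy array stands in for
# beamformer output; the authors' C/C++ pipeline is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_samples = 1000, 2400            # toy source grid, 2.4 s at 1 kHz
source_activity = rng.standard_normal((n_voxels, n_samples))
source_activity[137] += 5.0 * (rng.random(n_samples) < 0.05)   # a "spiking" voxel


def accumulated_image(activity, t_start, t_stop):
    """Sum of absolute source activity per voxel over the chosen window."""
    return np.abs(activity[:, t_start:t_stop]).sum(axis=1)


image = accumulated_image(source_activity, 0, n_samples)
print("voxel with the largest accumulated activity:", int(image.argmax()))
```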

  3. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals.

    PubMed

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of time-frequency representation of high-frequency brain signals could be larger than 1 terabytes (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed to channel-cross-channel matrix (CxC) representing the spatiotemporal patterns of every possible sensor-pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities that traditionally took 2-3 days and used 1-2 TB storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory.

  4. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on heterogeneous geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  5. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
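    Because the methodology is defined over time series data rather than over the models that generated them, the basic checking step can be illustrated independently of any modelling formalism. The Python sketch below checks one invented spatio-temporal property (the area of a multicellular population eventually stays above a threshold) against a toy trace; the trace format, the property, and the threshold are assumptions for illustration and do not reflect Mule's actual specification language.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class State:
        time: float
        concentration: float   # numeric state variable
        cluster_area: float    # emergent spatial measure (e.g. region area)

    def eventually_always(trace: List[State], pred: Callable[[State], bool]) -> bool:
        """True if pred holds from some time point onwards (an F G property)."""
        return any(all(pred(s) for s in trace[i:]) for i in range(len(trace)))

    # Toy simulation trace: the area grows linearly while concentration decays
    trace = [State(t, 1.0 / (1 + t), 10.0 + 5.0 * t) for t in range(20)]
    print(eventually_always(trace, lambda s: s.cluster_area > 50.0))   # True
    ```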

  6. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.

  7. Precision measurements and computations of transition energies in rotationally cold triatomic hydrogen ions up to the midvisible spectral range.

    PubMed

    Pavanello, Michele; Adamowicz, Ludwik; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Polyansky, Oleg L; Tennyson, Jonathan; Szidarovszky, Tamás; Császár, Attila G; Berg, Max; Petrignani, Annemieke; Wolf, Andreas

    2012-01-13

    First-principles computations and experimental measurements of transition energies are carried out for vibrational overtone lines of the triatomic hydrogen ion H3+ corresponding to floppy vibrations high above the barrier to linearity. Action spectroscopy is improved to detect extremely weak visible-light spectral lines on cold trapped H3+ ions. A highly accurate potential surface is obtained from variational calculations using explicitly correlated Gaussian wave function expansions. After nonadiabatic corrections, the floppy H3+ vibrational spectrum is reproduced at the 0.1 cm⁻¹ level up to 16,600 cm⁻¹.

  8. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
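    The kind of a priori prediction described here, a regression from job characteristics to per-node power that can then feed a power-aware scheduling decision, can be sketched as follows. The features, the model choice, the synthetic training data, and the facility power cap are all assumptions for illustration, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Synthetic historical jobs: [n_nodes, requested_hours, cpu_fraction]
    X = rng.uniform([1, 0.1, 0.0], [100, 48.0, 1.0], size=(500, 3))
    # Synthetic per-node power (W): base load + CPU-dependent term + noise
    y = 150 + 200 * X[:, 2] + rng.normal(0, 10, size=500)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    def admit(job, current_power_w, facility_cap_w=2.0e6):
        """Toy power-aware admission test using the predicted per-node power."""
        n_nodes, hours, cpu_frac = job
        predicted = model.predict([[n_nodes, hours, cpu_frac]])[0] * n_nodes
        return current_power_w + predicted <= facility_cap_w

    print(admit((64, 12.0, 0.9), current_power_w=1.95e6))
    ```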

  9. Impact of surface coupling grids on tropical cyclone extremes in high-resolution atmospheric simulations

    DOE PAGES

    Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...

    2016-02-25

    This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.
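    The mechanism described, a reduction of effective drag when stress is computed from winds represented on a coarser grid, follows from the fact that the magnitude of an averaged wind vector is at most the average of the individual magnitudes. The short Python sketch below illustrates this with assumed wind vectors and a simple bulk drag law; it is a schematic illustration, not the CESM coupling code.

    ```python
    import numpy as np

    # Four fine atmospheric cells that share one coarse ocean cell.
    # Winds (m/s) differ in direction across the fine cells.
    u = np.array([[10.0, 2.0], [9.0, -3.0], [11.0, 4.0], [8.0, -2.0]])
    cd, rho = 1.2e-3, 1.2   # assumed bulk drag coefficient and air density

    def stress(wind):
        """Bulk-formula surface stress tau = rho * Cd * |U| * U."""
        speed = np.linalg.norm(wind, axis=-1, keepdims=True)
        return rho * cd * speed * wind

    # Case 1: stress computed per fine (atmosphere-grid) cell.
    tau_fine = stress(u)

    # Case 2: stress computed once from the coarse-cell mean wind and then
    # applied to every fine cell (mimicking a coarser ocean coupling grid).
    tau_coarse = np.repeat(stress(u.mean(axis=0, keepdims=True)), len(u), axis=0)

    print(np.linalg.norm(tau_fine, axis=1).mean())    # larger effective drag
    print(np.linalg.norm(tau_coarse, axis=1).mean())  # smaller, misaligned drag
    ```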

  10. Effect of Fuel Injection and Mixing Characteristics on Pulse-Combustor Performance at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2014-01-01

    Recent calculations of pulse-combustors operating at high-pressure conditions produced pressure gains significantly lower than those observed experimentally and computationally at atmospheric conditions. The factors limiting the pressure-gain at high-pressure conditions are identified, and the effects of fuel injection and air mixing characteristics on performance are investigated. New pulse-combustor configurations were developed, and the results show that by suitable changes to the combustor geometry, fuel injection scheme and valve dynamics the performance of the pulse-combustor operating at high-pressure conditions can be increased to levels comparable to those observed at atmospheric conditions. In addition, the new configurations can significantly reduce the levels of NOx emissions. One particular configuration resulted in extremely low levels of NO, producing an emission index much less than one, although at a lower pressure-gain. Calculations at representative cruise conditions demonstrated that pulse-combustors can achieve a high level of performance at such conditions.

  11. Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.

    2002-01-01

    A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme, and a point implicit, first-order-accurate, time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low amplitude oscillation mode to a low frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.
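    The spatial discretization named here can be illustrated on a much simpler problem than reacting Euler flow. The sketch below applies a MUSCL-type reconstruction with a minmod limiter to one-dimensional linear advection, which is the scalar analogue of a second-order TVD scheme; the detailed chemistry, adaptive gridding, and point-implicit time marching of the actual study are omitted.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter: the least-steep slope, zero at extrema."""
        return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def advect_tvd(u, c, steps):
        """1-D linear advection with Courant number c (0 < c <= 1),
        MUSCL/minmod reconstruction, periodic boundaries, explicit update."""
        for _ in range(steps):
            du_l = u - np.roll(u, 1)          # backward differences
            du_r = np.roll(u, -1) - u         # forward differences
            slope = minmod(du_l, du_r)
            # Limited upwind value at the right face of each cell
            u_face = u + 0.5 * (1.0 - c) * slope
            flux = c * u_face
            u = u - (flux - np.roll(flux, 1))
        return u

    x = np.linspace(0, 1, 200, endpoint=False)
    u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse
    u1 = advect_tvd(u0.copy(), c=0.5, steps=100)
    print(u1.min(), u1.max())   # stays within [0, 1]: no spurious oscillations
    ```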

  12. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits

    PubMed Central

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334

  13. A Framework for Assessing High School Students' Intercultural Communicative Competence in a Computer-Mediated Language Learning Project

    ERIC Educational Resources Information Center

    Peng, Hsinyi; Lu, Wei-Hsin; Wang, Chao-I

    2009-01-01

    The purposes of this study were to identify the essential dimensions of intercultural communicative competence (ICC) and to establish a framework for assessing the ICC level of high school students that included a self-report inventory and scoring rubrics for online interaction in intercultural contexts. A total of 472 high school students from…

  14. Is Technology-Mediated Parental Monitoring Related to Adolescent Substance Use?

    PubMed

    Rudi, Jessie; Dworkin, Jodi

    2018-01-03

    Prevention researchers have identified parental monitoring leading to parental knowledge to be a protective factor against adolescent substance use. In today's digital society, parental monitoring can occur using technology-mediated communication methods, such as text messaging, email, and social networking sites. The current study aimed to identify patterns, or clusters, of in-person and technology-mediated monitoring behaviors, and examine differences between the patterns (clusters) in adolescent substance use. Cross-sectional survey data were collected from 289 parents of adolescents using Facebook and Amazon Mechanical Turk (MTurk). Cluster analyses were computed to identify patterns of in-person and technology-mediated monitoring behaviors, and chi-square analyses were computed to examine differences in substance use between the identified clusters. Three monitoring clusters were identified: a moderate in-person and moderate technology-mediated monitoring cluster (moderate-moderate), a high in-person and high technology-mediated monitoring cluster (high-high), and a high in-person and low technology-mediated monitoring cluster (high-low). Higher frequency of technology-mediated parental monitoring was not associated with lower levels of substance use. Results show that higher levels of technology-mediated parental monitoring may not be associated with adolescent substance use.
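    The analysis pipeline, clustering respondents on their in-person and technology-mediated monitoring frequencies and then testing cluster membership against a substance-use indicator, can be sketched as follows. The synthetic data, the binary use indicator, and the fixed choice of three clusters are illustrative assumptions, not the study's data or exact procedure.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)

    # Columns: frequency of in-person and technology-mediated monitoring (0-10)
    X = rng.uniform(0, 10, size=(289, 2))
    # Binary indicator of adolescent substance use (illustrative only)
    use = rng.integers(0, 2, size=289)

    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Chi-square test of independence between cluster membership and use
    table = np.zeros((3, 2), dtype=int)
    for c, u in zip(clusters, use):
        table[c, u] += 1
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
    ```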

  15. Noise optimization of a regenerative automotive fuel pump

    NASA Astrophysics Data System (ADS)

    Wang, J. F.; Feng, H. H.; Mou, X. L.; Huang, Y. X.

    2017-03-01

    The regenerative pump used in automotive fuel systems faces a noise problem. To understand the mechanism in detail, Computational Fluid Dynamics (CFD) and Computational Acoustic Analysis (CAA) were used together to characterize the fluid and acoustic behaviour of the fuel pump, using ANSYS-CFX 15.0 and LMS Virtual.Lab Rev12, respectively. The CFD model and the acoustical model were validated by a mass flow rate test and a sound pressure test, respectively. Comparison of the computational and experimental results shows that sound pressure levels at the observer position are consistent at high frequencies, especially at the blade passing frequency. After validating the models, several numerical models were analyzed for noise improvement. For a configuration with a greater number of impeller blades, the noise level at the blade passing frequency was significantly improved compared with the original model.

  16. NEXUS - Resilient Intelligent Middleware

    NASA Astrophysics Data System (ADS)

    Kaveh, N.; Hercock, R. Ghanea

    Service-oriented computing, a composition of distributed-object, component-based, and Web-based computing concepts, is becoming the widespread choice for developing dynamic, heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed, compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.

  17. TOSCA-based orchestration of complex clusters at the IaaS level

    NASA Astrophysics Data System (ADS)

    Caballer, M.; Donvito, G.; Moltó, G.; Rocha, R.; Velten, M.

    2017-10-01

    This paper describes the adoption and extension of the TOSCA standard by the INDIGO-DataCloud project for the definition and deployment of complex computing clusters, together with the required support in both OpenStack and OpenNebula, carried out in close collaboration with industry partners such as IBM. Two examples of such clusters are described in this paper: the definition of an elastic computing cluster to support the Galaxy bioinformatics application, where nodes are dynamically added to and removed from the cluster to adapt to the workload, and the definition of a scalable Apache Mesos cluster for the execution of batch jobs and support for long-running services. The coupling of TOSCA with Ansible Roles to perform automated installation has resulted in the definition of high-level, deterministic templates to provision complex computing clusters across different Cloud sites.

  18. Muons in the CMS High Level Trigger System

    NASA Astrophysics Data System (ADS)

    Verwilligen, Piet; CMS Collaboration

    2016-04-01

    The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfill this task while keeping the acceptance to physics signals maximal. The CMS experiment operates a two-level trigger system. First, a Level-1 Trigger (L1T), implemented using custom-designed electronics, reduces the event rate to a level compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger (HLT) system follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to trade off computational complexity, rate reduction, and high selection efficiency. With the computing power available in 2012, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at the HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II, will also be discussed.

  19. Assessment of Slat Noise Predictions for 30P30N High-Lift Configuration From BANC-III Workshop

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Lockard, David P.

    2015-01-01

    This paper presents a summary of the computational predictions and measurement data contributed to Category 7 of the 3rd AIAA Workshop on Benchmark Problems for Airframe Noise Computations (BANC-III), which was held in Atlanta, GA, on June 14-15, 2014. Category 7 represents the first slat-noise configuration to be investigated under the BANC series of workshops, namely, the 30P30N two-dimensional high-lift model (with a slat contour that was slightly modified to enable unsteady pressure measurements) at an angle of attack that is relevant to approach conditions. Originally developed for a CFD challenge workshop to assess computational fluid dynamics techniques for steady high-lift predictions, the 30P30N configuration has provided a valuable opportunity for the airframe noise community to collectively assess and advance the computational and experimental techniques for slat noise. The contributed solutions are compared with each other as well as with the initial measurements that became available just prior to the BANC-III Workshop. Specific features of a number of computational solutions on the finer grids compare reasonably well with the initial measurements from the FSU and JAXA facilities and/or with each other. However, no single solution (or subset of solutions) could be identified as clearly superior to the remaining solutions. Grid sensitivity studies presented by multiple BANC-III participants demonstrated a relatively consistent trend of reduced surface pressure fluctuations, higher levels of turbulent kinetic energy in the flow, and lower levels of both the narrow-band peaks and the broadband component of unsteady pressure spectra in the nearfield and farfield. The lessons learned from the BANC-III contributions have been used to identify improvements to the problem statement for future Category-7 investigations.

  20. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    NASA Astrophysics Data System (ADS)

    Aragón, Alejandro M.

    2014-06-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.
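    The idea behind the expression-template facility, building a lightweight expression object for something like a + b * c and evaluating it element by element only on assignment so that no temporary arrays are created, is language-agnostic. The Python sketch below mimics that deferred-evaluation pattern for illustration only; the C++11 implementation achieves it at compile time with templates, which this sketch does not attempt to reproduce.

    ```python
    import numpy as np

    class Expr:
        """Deferred element-wise expression over arrays (a toy analogue of
        C++ expression templates: no temporaries until evaluation)."""
        def __init__(self, fn):
            self.fn = fn                       # i -> value of element i
        def __add__(self, other):
            return Expr(lambda i: self.fn(i) + other.fn(i))
        def __mul__(self, other):
            return Expr(lambda i: self.fn(i) * other.fn(i))

    class Array(Expr):
        def __init__(self, data):
            self.data = np.asarray(data, dtype=float)
            super().__init__(lambda i: self.data[i])
        def assign(self, expr, n):
            # Single loop: each element of the whole expression tree is
            # computed on the fly, with no intermediate arrays allocated.
            self.data = np.array([expr.fn(i) for i in range(n)])
            return self

    a, b, c = Array([1, 2, 3]), Array([4, 5, 6]), Array([7, 8, 9])
    r = Array(np.zeros(3)).assign(a + b * c, n=3)
    print(r.data)   # [29. 42. 57.]
    ```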

  1. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    NASA Astrophysics Data System (ADS)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.

  2. Design of a fast computer-based partial discharge diagnostic system

    NASA Technical Reports Server (NTRS)

    Oliva, Jose R.; Karady, G. G.; Domitz, Stan

    1991-01-01

    Partial discharges cause progressive deterioration of insulating materials working under high-voltage conditions and may ultimately lead to insulator failure. Experimental findings indicate that deterioration increases with the number of discharges and is consequently proportional to the magnitude and frequency of the applied voltage. In order to obtain a better understanding of the mechanisms of deterioration produced by partial discharges, instrumentation capable of individual pulse resolution is required. A new computer-based partial discharge detection system was designed and constructed to conduct long-duration tests on sample capacitors. This system is capable of recording a large number of pulses without dead time and producing valuable information on the amplitude, polarity, and charge content of the discharges. The operation of the system is automatic, and no human supervision is required during the testing stage. Ceramic capacitors were tested at high voltage in long-duration tests. The results obtained indicated that the charge content of partial discharges shifts toward higher levels of charge as the level of deterioration in the capacitor increases.

  3. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    PubMed Central

    Proynov, Emil; Liu, Fenglai; Gan, Zhengting; Wang, Matthew; Kong, Jing

    2015-01-01

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of the Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values of a series of noble atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find good agreement with high-level ab initio values reported recently in the literature. For the question of damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of the He3 and Ar3 trimers fitted to the results of high-level wavefunction theories available from the literature. Both damping schemes respond well to the optimization of two parameters. PMID:26328836
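    For context, the C9 coefficients computed here conventionally enter the Axilrod-Teller-Muto triple-dipole expression for the nonadditive three-body dispersion energy; written without the short-range damping that the paper addresses, it reads

    ```latex
    E^{(3)}_{\mathrm{disp}} = C_9 \,
      \frac{1 + 3\cos\theta_1 \cos\theta_2 \cos\theta_3}
           {r_{12}^{3}\, r_{13}^{3}\, r_{23}^{3}} ,
    ```

    where the r_ij are the three interatomic distances and the theta_i are the interior angles of the triangle formed by the atoms (sign and normalization conventions vary between authors).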

  4. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proynov, Emil; Wang, Matthew; Kong, Jing, E-mail: jing.kong@mtsu.edu

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of the Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values of a series of noble atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find good agreement with high-level ab initio values reported recently in the literature. For the question of damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of the He3 and Ar3 trimers fitted to the results of high-level wavefunction theories available from the literature. Both damping schemes respond well to the optimization of two parameters.

  5. Inhibitory Competition between Shape Properties in Figure-Ground Perception

    ERIC Educational Resources Information Center

    Peterson, Mary A.; Skow, Emily

    2008-01-01

    Theories of figure-ground perception entail inhibitory competition between either low-level units (edge or feature units) or high-level shape properties. Extant computational models instantiate the 1st type of theory. The authors investigated a prediction of the 2nd type of theory: that shape properties suggested on the ground side of an edge are…

  6. From Dimer to Crystal: Calculating the Cohesive Energy of Rare Gas Solids

    ERIC Educational Resources Information Center

    Halpern, Arthur M.

    2012-01-01

    An upper-level undergraduate project is described in which students perform high-level ab initio computational scans of the potential energy curves for Ne2 and Ar2 and obtain the respective Lennard-Jones (LJ) potential parameters σ and ε for the dimers. Using this information, along with the summation of…
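    For reference, the Lennard-Jones pair potential whose parameters σ and ε the students extract from these scans has the standard form

    ```latex
    V_{\mathrm{LJ}}(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12}
                       - \left(\frac{\sigma}{r}\right)^{6} \right] ,
    ```

    where ε is the well depth and σ is the separation at which the potential crosses zero.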

  7. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchically layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted in the maritime image environment, where the segmented layered semantic objects include basic-level objects (i.e., sky/land/water) and deeper-level objects on the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
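    The top-down decomposition step, recursively splitting a region until a dissimilarity measure falls below a threshold, can be sketched as follows. For simplicity the sketch splits square blocks quadtree-style and uses intensity variance in place of the ergodicity measure and Peano-Cesaro triangulation described above; it is an illustration of the recursive scheme, not the authors' algorithm.

    ```python
    import numpy as np

    def decompose(img, y, x, h, w, threshold=50.0, min_size=4, regions=None):
        """Recursively split a block until it is homogeneous enough.

        Variance stands in for the ergodicity-based dissimilarity measure;
        square blocks stand in for Peano-Cesaro triangles.
        """
        if regions is None:
            regions = []
        block = img[y:y + h, x:x + w]
        if block.var() <= threshold or min(h, w) <= min_size:
            regions.append((y, x, h, w))          # homogeneous leaf region
            return regions
        h2, w2 = h // 2, w // 2
        for dy, dx, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                               (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
            decompose(img, y + dy, x + dx, hh, ww, threshold, min_size, regions)
        return regions

    rng = np.random.default_rng(3)
    img = np.zeros((64, 64))
    img[:32, :] = 200                     # bright upper band (e.g. "sky")
    img += rng.normal(0, 2, img.shape)    # sensor noise
    leaves = decompose(img, 0, 0, 64, 64)
    print(len(leaves), "homogeneous regions")
    ```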

  8. Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models

    NASA Astrophysics Data System (ADS)

    Dickes, Amanda Catherine; Sengupta, Pratim

    2013-06-01

    In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these agents obey simple rules assigned or manipulated by the user (e.g., speeding up, slowing down, etc.). It is the interactions between these agents, based on the rules assigned by the user, that give rise to emergent, aggregate-level behavior (e.g., formation and movement of the traffic jam). Natural selection is such an emergent phenomenon, which has been shown to be challenging for novices (K-16 students) to understand. Whereas prior research on learning evolutionary phenomena with MABMs has typically focused on high school students and beyond, we investigate how elementary students (4th graders) develop multi-level explanations of some introductory aspects of natural selection—species differentiation and population change—through scaffolded interactions with an MABM that simulates predator-prey dynamics in a simple birds-butterflies ecosystem. We conducted a semi-clinical interview-based study with ten participants, in which we focused on the following: a) identifying the nature of learners' initial interpretations of salient events or elements of the represented phenomena, b) identifying the roles these interpretations play in the development of their multi-level explanations, and c) identifying how attending to different levels of the relevant phenomena can make different mechanisms explicit to the learners. Our analysis also shows that although there were differences between high- and low-performing students (in terms of being able to explain population-level behaviors) in the pre-test, these differences disappeared in the post-test.

  9. Core commands across airway facilities systems.

    DOT National Transportation Integrated Search

    2003-05-01

    This study takes a high-level approach to evaluate computer systems without regard to the specific method of : interaction. This document analyzes the commands that Airway Facilities (AF) use across different systems and : the meanings attributed to ...

  10. Argonne Simulation Framework for Intelligent Transportation Systems

    DOT National Transportation Integrated Search

    1996-01-01

    A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distribu...

  11. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
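    The inner computation being accelerated, the distance evaluation and assignment step of Lloyd's k-means iteration, maps naturally onto wide SIMD lanes. A vectorized NumPy sketch of one iteration is shown below; the triangle-inequality style acceleration, memory-hierarchy blocking, and distributed parallelism of the actual implementation are not shown.

    ```python
    import numpy as np

    def kmeans_step(X, centers):
        """One Lloyd iteration: assign each sample to the nearest center,
        then recompute centers as cluster means (fully vectorized)."""
        # Squared distances via ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2
        d2 = (np.sum(X**2, axis=1)[:, None]
              - 2.0 * X @ centers.T
              + np.sum(centers**2, axis=1)[None, :])
        labels = np.argmin(d2, axis=1)
        new_centers = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        return labels, new_centers

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100_000, 8))        # e.g. 8 phenology features
    centers = X[rng.choice(len(X), 16, replace=False)]
    for _ in range(10):
        labels, centers = kmeans_step(X, centers)
    print(np.bincount(labels, minlength=16))     # cluster sizes
    ```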

  12. Computer-based training for safety: comparing methods with older and younger workers.

    PubMed

    Wallen, Erik S; Mulloy, Karen B

    2006-01-01

    Computer-based safety training is becoming more common and is being delivered to an increasingly aging workforce. Aging results in a number of changes that make it more difficult to learn from certain types of computer-based training. Instructional designs derived from cognitive learning theories may overcome some of these difficulties. Three versions of computer-based respiratory safety training were shown to older and younger workers who then took a high and a low level learning test. Younger workers did better overall. Both older and younger workers did best with the version containing text with pictures and audio narration. Computer-based training with pictures and audio narration may be beneficial for workers over 45 years of age. Computer-based safety training has advantages but workers of different ages may benefit differently. Computer-based safety programs should be designed and selected based on their ability to effectively train older as well as younger learners.

  13. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.

  14. The numerical dynamic for highly nonlinear partial differential equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.

  15. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  16. Hemothorax with a high carbohydrate antigen 19-9 level caused by a bronchogenic cyst.

    PubMed

    Tsuzuku, Akifumi; Asano, Fumihiro; Murakami, Anri; Masuda, Atsunori; Sobajima, Takuya; Matsuno, Yoshihiko; Matsumoto, Shinsuke; Mori, Yoshio; Takiya, Hiroshi; Iwata, Hitoshi

    2014-01-01

    A 58-year-old man presented with right-sided chest pain. Radiography and computed tomography showed a pleural effusion in the right chest and a mass in the right hilum. Thoracentesis showed a hemothorax. The carbohydrate antigen (CA) 19-9 level in the pleural effusion was very high, requiring differentiation from malignancy. Positron emission tomography showed no significant fluorodeoxyglucose (FDG) accumulation. Magnetic resonance imaging revealed a cystic lesion. The tumor was resected for both diagnosis and treatment. A pathological examination demonstrated a bronchogenic cyst. An immunohistochemical study suggested that the cyst was the source of the hemothorax and the high CA19-9 level.

  17. High-resolution x-ray computed tomography to understand ruminant phylogeny

    NASA Astrophysics Data System (ADS)

    Costeur, Loic; Schulz, Georg; Müller, Bert

    2014-09-01

    High-resolution X-ray computed tomography has become a vital technique for studying fossils down to the true micrometer level. Paleontological research requires the non-destructive analysis of internal structures of fossil specimens. We show how X-ray computed tomography enables us to visualize the inner ear of extinct and extant ruminants without destroying the skull. The inner ear, a sensory organ for hearing and balance, has a rather complex three-dimensional morphology and thus provides relevant phylogenetic information, as has been shown to date essentially in primates. We made the inner ears of a set of living and fossil ruminants visible using the phoenix x-ray nanotom® m (GE Sensing and Inspection Technologies GmbH). Because the objects are highly absorbing, a tungsten target was used, and the experiments were performed with a maximum accelerating voltage of 180 kV and a beam current of 30 μA. Possible stem ruminants of the living families are known in the fossil record, but extreme morphological convergences in external structures such as teeth are a strong limitation to our understanding of the evolutionary history of this economically important group of animals. We thus investigate the inner ear to assess its phylogenetic potential for ruminants, and our first results show strong family-level morphological differences.

  18. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters that are most sensitive to system performance. Low-level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.
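    A minimal two-level GA in the spirit described, with a coarse high-level GA over the most sensitive parameters whose candidates are each refined by a short low-level GA over the remaining parameters, is sketched below for a toy objective. The objective function, encoding, and population sizes are illustrative assumptions.

    ```python
    import numpy as np
    rng = np.random.default_rng(5)

    def objective(p):
        """Toy fitness: two sensitive parameters p[:2], six fine ones p[2:]."""
        return -(10 * np.sum((p[:2] - 0.3) ** 2) + np.sum((p[2:] - 0.7) ** 2))

    def ga(fit, dim, pop=20, gens=30, sigma=0.1):
        """Tiny real-coded GA: truncation selection + Gaussian mutation."""
        P = rng.random((pop, dim))
        for _ in range(gens):
            scores = np.array([fit(x) for x in P])
            parents = P[np.argsort(scores)[-pop // 2:]]       # keep best half
            children = parents + rng.normal(0, sigma, parents.shape)
            P = np.clip(np.vstack([parents, children]), 0, 1)
        scores = np.array([fit(x) for x in P])
        return P[np.argmax(scores)]

    def fitness_high(p_sensitive):
        # Low-level GA refines the remaining 6 parameters for this candidate.
        best_fine = ga(lambda q: objective(np.concatenate([p_sensitive, q])),
                       dim=6, pop=10, gens=10)
        return objective(np.concatenate([p_sensitive, best_fine]))

    best_sensitive = ga(fitness_high, dim=2, pop=8, gens=5)
    print("best sensitive parameters:", np.round(best_sensitive, 2))
    ```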

  19. The reported preparedness and disposition by students in a Nigerian university towards the use of information technology for medical education.

    PubMed

    Fadeyi, A; Desalu, O O; Ameen, A; Adeboye, A N Muhammed

    2010-01-01

    The computer and information technology (IT) revolution has transformed modern health care systems in the areas of communication, storage and retrieval of medical information, and teaching, but little is known about IT skills and use in most developing nations. The aim of this study was to evaluate the reported preparedness and disposition of medical students in a Nigerian university toward the use of IT for medical education. A self-administered structured questionnaire containing 24 items was used to obtain information from medical students at the University of Ilorin, Nigeria, on their level of computer usage, knowledge of computer software and hardware, availability of and access to computers, possession of a personal computer and e-mail address, preferred method of medical education, and the use of computers as a supplement to medical education. Of 479 medical students, 179 (37.4%) had basic computer skills, 209 (43.6%) had intermediate skills, and 58 (12.1%) had advanced computer skills. Three hundred and thirty (68.9%) had access to a computer and 451 (94.2%) had e-mail addresses. For medical teaching, the majority (83.09%) preferred live lectures, 56.78% lecture videos, and 35.1% lecture handouts on a web site, while 410 (85.6%) wanted computers as a supplement to live lectures. Less than half (39.5%) wanted laptop acquisition to be mandatory. Students with advanced computer skills were better prepared and more favorably disposed to IT than those with basic computer skills. The findings revealed that the medical students with advanced computer skills were well prepared and disposed to IT-based medical education. Therefore, a high level of computer skill is required for students to be prepared and favorably disposed to IT-based medical education.

  20. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
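    The CVT relaxation at the heart of the method, Lloyd iterations that move each generator to the centroid of its Voronoi cell and thereby monotonically decrease the CVT energy, is easy to sketch on a sampled point cloud. The Python sketch below is an illustration only; the Voronoi Particle dynamics, the tailored equation of state, and the load-balance weighting are omitted.

    ```python
    import numpy as np

    def lloyd_cvt(points, n_parts, iters=50, seed=0):
        """Approximate Centroidal Voronoi Tessellation of a point cloud.

        Each Lloyd step assigns points to their nearest generator and then
        moves every generator to the centroid of its cell.
        """
        rng = np.random.default_rng(seed)
        gen = points[rng.choice(len(points), n_parts, replace=False)].copy()
        for _ in range(iters):
            d2 = ((points[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
            owner = np.argmin(d2, axis=1)
            for k in range(n_parts):
                members = points[owner == k]
                if len(members):
                    gen[k] = members.mean(axis=0)   # move to cell centroid
        return gen, owner

    # Hypothetical usage: partition 20,000 sampled elements into 8 subdomains
    pts = np.random.default_rng(1).random((20_000, 2))
    generators, part = lloyd_cvt(pts, n_parts=8)
    print(np.bincount(part))   # element counts per partitioning subdomain
    ```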

  1. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  2. The Search for Specialists and Managers. Staff Shortage in Germany.

    ERIC Educational Resources Information Center

    Stahl, Klaus, Ed.

    1999-01-01

    Despite its high unemployment level, Germany is experiencing a shortage of specialists and managers. Germany's need for highly qualified information technology (IT) workers and engineers is particularly great. Approximately 10,000 posts for computer scientists and IT specialists remained vacant in 1998. Because of the shortage of such specialists,…

  3. Mechanical Modeling and Computer Simulation of Protein Folding

    ERIC Educational Resources Information Center

    Prigozhin, Maxim B.; Scott, Gregory E.; Denos, Sharlene

    2014-01-01

    In this activity, science education and modern technology are bridged to teach students at the high school and undergraduate levels about protein folding and to strengthen their model building skills. Students are guided from a textbook picture of a protein as a rigid crystal structure to a more realistic view: proteins are highly dynamic…

  4. State of Washington Computer Use Survey.

    ERIC Educational Resources Information Center

    Beal, Jack L.; And Others

    This report presents the results of a spring 1982 survey of a random sample of Washington public schools which separated findings according to school level (elementary, middle, junior high, or high school) and district size (either less than or greater than 2,000 enrollment). A brief review of previous studies and a description of the survey…

  5. Dopamine Receptor DOP-4 Modulates Habituation to Repetitive Photoactivation of a "C. elegans" Polymodal Nociceptor

    ERIC Educational Resources Information Center

    Ardiel, Evan L.; Giles, Andrew C.; Yu, Alex J.; Lindsay, Theodore H.; Lockery, Shawn R.; Rankin, Catharine H.

    2016-01-01

    Habituation is a highly conserved phenomenon that remains poorly understood at the molecular level. Invertebrate model systems, like "Caenorhabditis elegans," can be a powerful tool for investigating this fundamental process. Here we established a high-throughput learning assay that used real-time computer vision software for behavioral…

  6. Women and Minorities in High-Tech Careers. ERIC Digest No. 226.

    ERIC Educational Resources Information Center

    Brown, Bettina Lankard

    Women and minorities are underrepresented in technology-related careers for many reasons, including lack of access, level of math and science achievement, and emotional and social attitudes about computer capabilities. Schools and teachers can use the following strategies to attract women and minorities to high-tech careers and prepare them for…

  7. Carbon-oxygen clusters as hypothetical high energy-density materials

    NASA Astrophysics Data System (ADS)

    Evangelisti, Stefano

    1997-05-01

    An ab initio investigation of the hypothetical systems CnOn (n = 2, 3, 4) is presented. Calculations have been performed at the SCF and MP2 levels, using a [3s2p1d] basis set on each atom. At this level of approximation, two metastable structures have been found: C2O2 with C2v symmetry (an irregular tetrahedron) and C4O4 with Td symmetry (two regular tetrahedra interpenetrating each other). On the other hand, the search for local minima with Cnv structures has failed for n = 3, 4. For the two metastable structures, larger basis sets have also been used, up to [4s3p2d1f] (in the case of C4O4, at the SCF level only). The computed energy release of the dissociation reaction (CnOn → n CO) of the two metastable structures is very high (about 100 kcal per mole of CO produced). It is of the same magnitude as the energy computed for the corresponding isoelectronic structures N4 and N8. If these or similar CO clusters could be synthesized, they would be candidates for high energy-density materials (HEDM).

  8. Segmentectomy for giant pulmonary sclerosing haemangiomas with high serum KL-6 levels

    PubMed Central

    Kuroda, Hiroaki; Mun, Mingyon; Okumura, Sakae; Nakagawa, Ken

    2012-01-01

    We describe a 61-year-old female patient with a giant pulmonary sclerosing haemangioma (PSH) and an extremely high preoperative serum KL-6 level. During an annual health screening, the patient showed a posterior mediastinal mass on chest radiography. Chest computed tomography and magnetic resonance imaging revealed a well-circumscribed 60 mm diameter nodule with marked contrast enhancement in the left lower lobe. The preoperative serum KL-6 level was elevated to 8204 U/ml. We performed a four-port thoracoscopic basal segmentectomy and lymph node sampling for diagnosis and therapy. The postoperative diagnosis was PSH. The serum KL-6 level decreased dramatically after tumour resection. To the best of our knowledge, this is the first report of a patient with PSH showing a high serum KL-6 level. PMID:22454483

  9. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
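    The layered flow described, progressively rewriting a high-level program into a restricted, hardware-specific instruction set, can be illustrated with a deliberately simplified sketch. The gate names, decomposition rules, and "hardware" mnemonics below are invented for illustration and are not the API of any real quantum software stack.

    ```python
    from math import pi

    # Layer 1: high-level circuit as a list of abstract gates
    circuit = [("H", 0), ("CNOT", 0, 1), ("T", 1)]

    # Layer 2: decompose abstract gates into an intermediate set {RZ, RX, CZ}
    def decompose(gate):
        name, *qubits = gate
        if name == "H":      # H = RZ(pi/2) RX(pi/2) RZ(pi/2), up to global phase
            q = qubits[0]
            return [("RZ", q, pi / 2), ("RX", q, pi / 2), ("RZ", q, pi / 2)]
        if name == "T":      # T = RZ(pi/4), up to global phase
            return [("RZ", qubits[0], pi / 4)]
        if name == "CNOT":   # CNOT = (I x H) CZ (I x H)
            c, t = qubits
            return decompose(("H", t)) + [("CZ", c, t)] + decompose(("H", t))
        return [gate]

    intermediate = [g for gate in circuit for g in decompose(gate)]

    # Layer 3: map to "hardware" instructions (string mnemonics per qubit)
    def to_hardware(gate):
        name, *rest = gate
        if name in ("RZ", "RX"):
            q, angle = rest
            return f"{name.lower()} q[{q}], {angle:.4f}"
        if name == "CZ":
            return f"cz q[{rest[0]}], q[{rest[1]}]"
        raise ValueError(f"unsupported gate {name}")

    for instr in map(to_hardware, intermediate):
        print(instr)
    ```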

  10. Parallel Algorithms for the Exascale Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
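    The reproducibility issue mentioned for global sums is that floating-point addition is not associative, so different parallel decompositions of the same data can produce slightly different results. The sketch below illustrates the general point and one standard mitigation (a correctly rounded sum via math.fsum); it is not the specific algorithm developed in the LANL projects.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.standard_normal(1_000_000)

    # Two different summation orders, standing in for two different
    # parallel decompositions of the same global data.
    naive_a = float(np.sum(x))
    naive_b = float(np.sum(x[::-1]))

    # math.fsum returns a correctly rounded sum (Shewchuk's algorithm),
    # so the same set of values gives the same result in any order.
    exact_a = math.fsum(x)
    exact_b = math.fsum(x[::-1])

    print("naive sums differ by:", naive_a - naive_b)   # typically nonzero
    print("fsum sums differ by :", exact_a - exact_b)   # exactly zero
    ```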

  11. Three-Dimensional Effects in Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; LeeReusch, Elizabeth M.; Watson, Ralph D.

    2003-01-01

    In an effort to discover the causes for disagreement between previous two-dimensional (2-D) computations and nominally 2-D experiment for flow over the three-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using three-dimensional (3-D) structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects on the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  12. Three-Dimensional Effects on Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2002-01-01

    In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects on the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  13. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  14. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained in popularity with a one machine - one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is only concentrating on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This does raise two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  15. Energetics using the single point IMOMO (integrated molecular orbital+molecular orbital) calculations: Choices of computational levels and model system

    NASA Astrophysics Data System (ADS)

    Svensson, Mats; Humbel, Stéphane; Morokuma, Keiji

    1996-09-01

    The integrated MO+MO (IMOMO) method, recently proposed for geometry optimization, is tested for accurate single point calculations. The principal idea of the IMOMO method is to reproduce results of a high level MO calculation for a large "real" system by dividing it into a small "model" system and the rest, and applying different levels of MO theory to the two parts. Test examples are the activation barrier of the SN2 reaction of Cl- + alkyl chlorides, the C=C double bond dissociation of olefins, and the energy of reaction for epoxidation of benzene. The effects of basis set and method in the lower level calculation as well as the effects of the choice of model system are investigated in detail. The IMOMO method gives an approximation to the high level MO energetics on the real system, in most cases with very small errors, with a small additional cost over the low level calculation. For instance, when the MP2 (Møller-Plesset second-order perturbation) method is used as the lower level method, the IMOMO method reproduces the results of a very high level MO method within 2 kcal/mol, with less than 50% of additional computer time, for the first two test examples. When the HF (Hartree-Fock) method is used as the lower level method, it is less accurate and depends more on the choice of model system, though the improvement over the HF energy is still very significant. Thus the IMOMO single point calculation provides a method for obtaining reliable local energetics such as bond energies and activation barriers for a large molecular system.
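
    The extrapolation underlying such integrated single-point calculations can be written compactly; in generic notation (not copied from the paper), the high-level energy of the real system is estimated from a high-level calculation on the small model system plus the difference between low-level calculations on the real and model systems:

      E_{\mathrm{IMOMO}}(\mathrm{real}) \approx
        E_{\mathrm{high}}(\mathrm{model}) + E_{\mathrm{low}}(\mathrm{real}) - E_{\mathrm{low}}(\mathrm{model})

    The additional cost over the plain low-level calculation is therefore only the high-level calculation on the model system and the low-level calculation on the model system.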

  16. Determining high touch areas in the operating room with levels of contamination.

    PubMed

    Link, Terri; Kleiner, Catherine; Mancuso, Mary P; Dziadkowiec, Oliwier; Halverson-Carpenter, Katherine

    2016-11-01

    The Centers for Disease Control and Prevention put forth the recommendation to clean areas considered high touch more frequently than minimal touch surfaces. The operating room was not included in these recommendations. The purpose of this study was to determine the most frequently touched surfaces in the operating room and their level of contamination. Phase 1 was a descriptive study to identify high touch areas in the operating room. In phase 2, high touch areas determined in phase 1 were cultured to determine if high touch areas observed were also highly contaminated and if they were more contaminated than a low touch surface. The 5 primary high touch surfaces in order were the anesthesia computer mouse, OR bed, nurse computer mouse, OR door, and anesthesia medical cart. Using the OR light as a control, this study demonstrated that a low touch area was less contaminated than the high touch areas with the exception of the OR bed. Based on information and data collected in this study, it is recommended that an enhanced cleaning protocol be established based on the most frequently touched surfaces in the operating room. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  17. Computation of rainfall erosivity from daily precipitation amounts.

    PubMed

    Beguería, Santiago; Serrano-Notivoli, Roberto; Tomas-Burguera, Miquel

    2018-10-01

    Rainfall erosivity is an important parameter in many erosion models, and the EI30 defined by the Universal Soil Loss Equation is one of the best known erosivity indices. One issue with this and other erosivity indices is that they require continuous breakpoint, or high frequency time interval, precipitation data. These data are rare in comparison to more common medium-frequency data, such as the daily precipitation data commonly recorded by many national and regional weather services. Devising methods for computing estimates of rainfall erosivity from daily precipitation data that are comparable to those obtained by using high-frequency data is, therefore, highly desirable. Here we present a method for producing such estimates, based on optimal regression tools such as the Gamma Generalised Linear Model and universal kriging. Unlike other methods, this approach produces estimates that are unbiased and very close to the observed EI30, especially when these are aggregated at the annual level. We illustrate the method with a case study comprising more than 1500 high-frequency precipitation records across Spain. Although the original records have a short span (the mean length is around 10 years), computation of spatially-distributed upscaling parameters offers the possibility to compute high-resolution climatologies of the EI30 index based on currently available, long-span, daily precipitation databases. Copyright © 2018 Elsevier B.V. All rights reserved.
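
    As a toy illustration of the daily-upscaling idea (and emphatically not the paper's Gamma GLM with universal kriging), the Python sketch below fits a simple power-law relation between daily precipitation totals and event erosivity; all data, function names, and coefficients here are hypothetical.

      # Toy sketch only: fit EI30 ~ a * P_daily**b by least squares in log space.
      # This is not the Gamma-GLM/kriging method of the paper; it only shows the
      # idea of estimating erosivity from daily precipitation amounts.
      import numpy as np

      def fit_daily_erosivity(p_daily, ei30_obs):
          """Return (a, b) for EI30 ~ a * P**b, fitted on rainy days only."""
          p = np.asarray(p_daily, dtype=float)
          e = np.asarray(ei30_obs, dtype=float)
          mask = (p > 0) & (e > 0)
          b, log_a = np.polyfit(np.log(p[mask]), np.log(e[mask]), 1)
          return np.exp(log_a), b

      # Hypothetical calibration data: mm of daily rain vs. observed EI30.
      p_obs = [12.0, 25.0, 40.0, 8.0, 60.0]
      ei_obs = [15.0, 70.0, 180.0, 6.0, 420.0]
      a, b = fit_daily_erosivity(p_obs, ei_obs)
      print(a * 30.0**b)   # estimated EI30 for a hypothetical 30 mm day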

  18. Toward a Computational Neuropsychology of High-Level Vision.

    DTIC Science & Technology

    1984-08-20

    known as visual agnosia (also called "mindblindness"); this patient failed to recognize her nurses, got lost frequently when travelling familiar routes ... Patients with visual agnosia are not blind: these patients can compare two shapes reliably when both are visible, but they cannot ... visually recognize what an object is (although many can recognize objects by touch). This sort of agnosia has been well-documented in the literature (see

  19. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  20. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    G. Robert Redinbo. High-speed signal processing is an important application of ... techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all ... detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to

  1. A comparative trial of paper-and-pencil versus computer administration of the Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire.

    PubMed

    Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P

    2001-02-01

    Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.

  2. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  3. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  4. Exploring the Integration of Computational Modeling in the ASU Modeling Curriculum

    NASA Astrophysics Data System (ADS)

    Schatz, Michael; Aiken, John; Burk, John; Caballero, Marcos; Douglas, Scott; Thoms, Brian

    2012-03-01

    We describe the implementation of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). We discuss how VPython allows students to utilize all four structures that describe a model as given by the ASU Modeling Instruction curriculum. Implications for future work will also be discussed.

  5. Computer-assisted instruction and diagnosis of radiographic findings.

    PubMed

    Harper, D; Butler, C; Hodder, R; Allman, R; Woods, J; Riordan, D

    1984-04-01

    Recent advances in computer technology, including high bit-density storage, digital imaging, and the ability to interface microprocessors with videodisk, create enormous opportunities in the field of medical education. This program, utilizing a personal computer, videodisk, BASIC language, a linked textfile system, and a triangulation approach to the interpretation of radiographs developed by Dr. W. L. Thompson, can enable the user to engage in a user-friendly, dynamic teaching program in radiology, applicable to various levels of expertise. Advantages include a relatively more compact and inexpensive system with rapid access and ease of revision which requires little instruction to the user.

  6. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  7. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    PubMed Central

    Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan

    2004-01-01

    Background: Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods: We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results: We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions: Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
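
    For readers unfamiliar with cellular-automaton models of excitable media, the following toy Python sketch (a Greenberg-Hastings-style automaton, far simpler than the quantitative whole-heart model described above) shows the basic grid-update idea: a resting cell fires when a neighbouring cell is excited, then passes through a refractory state before resting again. All names and parameters are illustrative.

      # Toy excitable-medium cellular automaton (Greenberg-Hastings style), shown
      # only to illustrate grid-based state updates; not the paper's model.
      import numpy as np

      REST, EXCITED, REFRACTORY = 0, 1, 2

      def step(grid):
          new = grid.copy()
          excited = (grid == EXCITED)
          # A resting cell fires if any 4-neighbor is excited (periodic boundaries).
          neighbor_excited = (
              np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
              np.roll(excited, 1, 1) | np.roll(excited, -1, 1)
          )
          new[(grid == REST) & neighbor_excited] = EXCITED
          new[grid == EXCITED] = REFRACTORY
          new[grid == REFRACTORY] = REST
          return new

      grid = np.zeros((64, 64), dtype=int)
      grid[32, 32] = EXCITED          # single stimulus in the center
      for _ in range(10):
          grid = step(grid)
      print((grid == EXCITED).sum())  # size of the propagating wavefront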

  8. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    PubMed

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  9. Locating hardware faults in a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
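
    To illustrate the partitioning described in the claim, the Python sketch below groups the tiers of a heap-numbered complete binary tree into non-overlapping two-tier test levels and enumerates the test cells (a level-local subtree root plus its descendants within that level). It is only a schematic of the data decomposition, not the patented uplink/downlink test procedure, and the tree numbering convention is our own.

      # Sketch: partition a complete binary tree (root = 1, children 2n and 2n+1)
      # into non-overlapping two-tier test levels and list test cells per level.
      def tiers(depth):
          """Node ids per tier of a complete binary tree of the given depth."""
          return [list(range(2**t, 2**(t + 1))) for t in range(depth)]

      def test_levels(depth, tiers_per_level=2):
          ts = tiers(depth)
          return [ts[i:i + tiers_per_level] for i in range(0, depth, tiers_per_level)]

      def test_cells(level):
          """One cell per root in the level's top tier: the root plus its in-level descendants."""
          top, *rest = level
          cells = []
          for root in top:
              descendants = [n for tier in rest for n in tier
                             if n >> (n.bit_length() - root.bit_length()) == root]
              cells.append([root] + descendants)
          return cells

      for level in test_levels(depth=4):
          print(test_cells(level))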

  10. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a high amount of computational resources, while significantly increasing the coding efficiency. In spite of the fact that both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms based on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, thereby allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm, which is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
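
    To make the pattern-based idea concrete, here is a small Python sketch of a greedy integer-pel search over a small diamond pattern with a SAD cost and a crude early-termination threshold. It illustrates the flavour of such algorithms only; it is not the HM-integrated framework of the paper, and all names and thresholds are illustrative.

      # Greedy small-diamond integer-pel motion search with SAD cost and a simple
      # early-termination threshold (illustrative values only).
      import numpy as np

      def sad(block, ref, x, y):
          h, w = block.shape
          return int(np.abs(block - ref[y:y + h, x:x + w]).sum())

      def diamond_search(block, ref, x0, y0, early_stop=64):
          """Return (best_x, best_y, best_SAD), starting the search at (x0, y0)."""
          pattern = [(1, 0), (-1, 0), (0, 1), (0, -1)]
          best_x, best_y = x0, y0
          best_cost = sad(block, ref, x0, y0)
          improved = True
          while improved and best_cost > early_stop:   # crude early termination
              improved = False
              for dx, dy in pattern:
                  x, y = best_x + dx, best_y + dy
                  if 0 <= x <= ref.shape[1] - block.shape[1] and \
                     0 <= y <= ref.shape[0] - block.shape[0]:
                      cost = sad(block, ref, x, y)
                      if cost < best_cost:
                          best_cost, best_x, best_y, improved = cost, x, y, True
          return best_x, best_y, best_cost

      ref = np.random.randint(0, 255, (64, 64))
      block = ref[20:28, 30:38].copy()
      # A greedy search may stop at a local minimum; this is only an illustration.
      print(diamond_search(block, ref, 28, 18))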

  11. Computational Design of Ligand Binding Proteins with High Affinity and Selectivity

    PubMed Central

    Dou, Jiayi; Doyle, Lindsey; Nelson, Jorgen W.; Schena, Alberto; Jankowski, Wojciech; Kalodimos, Charalampos G.; Johnsson, Kai; Stoddard, Barry L.; Baker, David

    2014-01-01

    The ability to design proteins with high affinity and selectivity for any given small molecule would have numerous applications in biosensing, diagnostics, and therapeutics, and is a rigorous test of our understanding of the physiochemical principles that govern molecular recognition phenomena. Attempts to design ligand binding proteins have met with little success, however, and the computational design of precise molecular recognition between proteins and small molecules remains an “unsolved problem”1. We describe a general method for the computational design of small molecule binding sites with pre-organized hydrogen bonding and hydrophobic interfaces and high overall shape complementarity to the ligand, and use it to design protein binding sites for the steroid digoxigenin (DIG). Of 17 designs that were experimentally characterized, two bind DIG; the highest affinity design has the lowest predicted interaction energy and the most pre-organized binding site in the set. A comprehensive binding-fitness landscape of this design generated by library selection and deep sequencing was used to guide optimization of binding affinity to a picomolar level, and two X-ray co-crystal structures of optimized complexes show atomic level agreement with the design models. The designed binder has a high selectivity for DIG over the related steroids digitoxigenin, progesterone, and β-estradiol, which can be reprogrammed through the designed hydrogen-bonding interactions. Taken together, the binding fitness landscape, co-crystal structures, and thermodynamic binding parameters illustrate how increases in binding affinity can result from distal sequence changes that limit the protein ensemble to conformers making the most energetically favorable interactions with the ligand. The computational design method presented here should enable the development of a new generation of biosensors, therapeutics, and diagnostics. PMID:24005320

  12. Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors

    PubMed Central

    Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-01-01

    Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443
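
    The reliability figures quoted above are Cronbach's alpha values. For reference, the statistic is computed from the respondent-by-item score matrix with the standard formula, as in the short Python sketch below (the demo data are made up).

      # Cronbach's alpha for a (respondents x items) response matrix; the standard
      # formula alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                      # number of items
          item_vars = scores.var(axis=0, ddof=1)   # per-item variance
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical 5 respondents x 4 items (e.g., 1-5 proficiency ratings).
      demo = [[1, 2, 1, 2], [3, 3, 4, 3], [5, 4, 5, 5], [2, 2, 3, 2], [4, 5, 4, 4]]
      print(round(cronbach_alpha(demo), 2))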

  13. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    NASA Technical Reports Server (NTRS)

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  14. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    PubMed

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
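
    For the simplest special case, two single-determinant wave functions built from same-spin occupied orbitals in a common atomic-orbital basis, the overlap reduces to a determinant of the occupied-block MO overlap matrix. The Python sketch below shows only that case; the paper's algorithm handles general multi-determinant wave functions and does so efficiently via recurring intermediates.

      # Overlap of two single-determinant wave functions in a common AO basis:
      # <Psi_1|Psi_2> = det(C1_occ^T @ S_AO @ C2_occ).
      import numpy as np

      def determinant_overlap(c1_occ, c2_occ, s_ao):
          """c*_occ: (n_AO x n_occ) occupied MO coefficients; s_ao: AO overlap matrix."""
          return np.linalg.det(c1_occ.T @ s_ao @ c2_occ)

      # Sanity check: identical orthonormal orbitals give an overlap of ~1.
      n_ao, n_occ = 4, 2
      s_ao = np.eye(n_ao)
      c = np.linalg.qr(np.random.rand(n_ao, n_occ))[0]   # orthonormal columns
      print(determinant_overlap(c, c, s_ao))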

  15. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.

  16. New broadband square-law detector

    NASA Technical Reports Server (NTRS)

    Reid, M. S.; Gardner, R. A.; Stelzried, C. T.

    1975-01-01

    Compact device has wide dynamic range, accurate square-law response, good thermal stability, high-level dc output with immunity to ground-loop problems, ability to insert known time constants for radiometric applications, and fast response times compatible with computer systems.

  17. Python for Ecology

    EPA Science Inventory

    Python is a high-level scripting language that is becoming increasingly popular for scientific computing. This all-day workshop is designed to introduce the basics of Python programming to ecologists. Some scripting/programming experience is recommended (e.g. familiarity with R)....

  18. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines Annealing Cellular Neural Network (ACNN) and Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected with their local neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing in all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over the state-of-the-art microcomputer and DSP microelectronics. The design feasibility of a compact current-mode VLSI implementation of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM based avionics architecture for NASA's New Millennium Program is also described.

  19. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    NASA Astrophysics Data System (ADS)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm learns an end-to-end mapping from low-dose Computed Tomography images to denoised, normal-dose-quality CT images. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of the peak signal to noise ratio, root mean square error, and a structural similarity index.
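
    A minimal fully convolutional denoiser with rectified linear units, in the spirit of the mapping described above, can be sketched in a few lines of PyTorch. The layer counts, channel widths, learning rate, and training data below are placeholders, not the authors' actual network or data.

      # Minimal fully convolutional denoiser sketch (illustrative architecture only).
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(64, 1, kernel_size=3, padding=1),
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = nn.MSELoss()

      low_dose = torch.rand(8, 1, 64, 64)      # placeholder low-dose batch
      normal_dose = torch.rand(8, 1, 64, 64)   # placeholder normal-dose targets

      for _ in range(5):                       # a few illustrative update steps
          optimizer.zero_grad()
          loss = loss_fn(model(low_dose), normal_dose)
          loss.backward()
          optimizer.step()
      print(loss.item())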

  20. Parametric Study of Pulse-Combustor-Driven Ejectors at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2015-01-01

    Pulse-combustor configurations developed in recent studies have demonstrated performance levels at high-pressure operating conditions comparable to those observed at atmospheric conditions. However, problems related to the way fuel was being distributed within the pulse combustor were still limiting performance. In the first part of this study, new configurations are investigated computationally aimed at improving the fuel distribution and performance of the pulse-combustor. Subsequent sections investigate the performance of various pulse-combustor driven ejector configurations operating at high-pressure conditions, focusing on the effects of fuel equivalence ratio and ejector throat area. The goal is to design pulse-combustor-ejector configurations that maximize pressure gain while achieving a thermal environment acceptable to a turbine, and at the same time maintain acceptable levels of NOx emissions and flow non-uniformities. The computations presented here have demonstrated pressure gains of up to 2.8%.

  1. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    PubMed

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, a parallel programming and computing platform (Nvidia CUDA), which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with a rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time.

  2. Computational Analysis of Hybrid Two-Photon Absorbers with Excited State Absorption

    DTIC Science & Technology

    2007-03-01

    level. This hybrid arrangement creates a complex dynamical system in which the electron carrier concentration of every photo-activated energy level ... spatiotemporal details of the electron population densities of each photo-activated energy level as well as the pulse shape in space and time. The main ... experiments at low input energy. However, further additions must be done to the calculation of the optical path for high input energy.

  3. Low Thrust Orbital Maneuvers Using Ion Propulsion

    NASA Astrophysics Data System (ADS)

    Ramesh, Eric

    2011-10-01

    Low-thrust maneuver options, such as electric propulsion, offer specific challenges within mission-level Modeling, Simulation, and Analysis (MS&A) tools. This project seeks to transition techniques for simulating low-thrust maneuvers from detailed engineering-level simulations such as AGI's Satellite ToolKit (STK) Astrogator to mission-level simulations such as the System Effectiveness Analysis Simulation (SEAS). Our project goals are as follows: A) Assess different low-thrust options to achieve various orbital changes; B) Compare such approaches to more conventional, high-thrust profiles; C) Compare computational cost and accuracy of various approaches to calculate and simulate low-thrust maneuvers; D) Recommend methods for implementing low-thrust maneuvers in high-level mission simulations; E) Prototype recommended solutions.

  4. High-Resolution Computed Tomography and Pulmonary Function Findings of Occupational Arsenic Exposure in Workers.

    PubMed

    Ergün, Recai; Evcik, Ender; Ergün, Dilek; Ergan, Begüm; Özkan, Esin; Gündüz, Özge

    2017-05-05

    Very few studies have evaluated non-malignant pulmonary diseases after occupational arsenic exposure. To investigate the effects of occupational arsenic exposure on the lung by high-resolution computed tomography and pulmonary function tests. Retrospective cross-sectional study. In this study, 256 workers with suspected respiratory occupational arsenic exposure were included, with an average age of 32.9±7.8 years and an average of 3.5±2.7 working years. Hair and urinary arsenic levels were analysed. High-resolution computed tomography and pulmonary function tests were done. In workers with occupational arsenic exposure, high-resolution computed tomography showed 18.8% pulmonary involvement. Among those with pulmonary involvement, pulmonary nodules were the most frequently seen lesion (64.5%). The other findings of pulmonary involvement were 18.8% diffuse interstitial lung disease, 12.5% bronchiectasis, and 27.1% bullae-emphysema. Patients with pulmonary involvement were older and smoked more. Pulmonary involvement was 5.2 times more frequent in patients with arsenic-related skin lesions. Diffusing capacity of the lung for carbon monoxide was significantly lower in patients with pulmonary involvement. Besides lung cancer, chronic occupational inhalation of arsenic may cause non-malignant pulmonary findings such as bronchiectasis, pulmonary nodules and diffuse interstitial lung disease. Therefore, in order to detect pulmonary involvement in the early stages, workers who experience occupational arsenic exposure should be followed by diffusion testing and high-resolution computed tomography.

  5. Focus on Methodology: Beyond paper and pencil: Conducting computer-assisted data collection with adolescents in group settings.

    PubMed

    Raffaelli, Marcela; Armstrong, Jessica; Tran, Steve P; Griffith, Aisha N; Walker, Kathrin; Gutierrez, Vanessa

    2016-06-01

    Computer-assisted data collection offers advantages over traditional paper and pencil measures; however, little guidance is available regarding the logistics of conducting computer-assisted data collection with adolescents in group settings. To address this gap, we draw on our experiences conducting a multi-site longitudinal study of adolescent development. Structured questionnaires programmed on laptop computers using Audio Computer Assisted Self-Interviewing (ACASI) were administered to groups of adolescents in community-based and afterschool programs. Although implementing ACASI required additional work before entering the field, we benefited from reduced data processing time, high data quality, and high levels of youth motivation. Preliminary findings from an ethnically diverse sample of 265 youth indicate favorable perceptions of using ACASI. Using our experiences as a case study, we provide recommendations on selecting an appropriate data collection device (including hardware and software), preparing and testing the ACASI, conducting data collection in the field, and managing data. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  6. The quantum computer game: citizen science

    NASA Astrophysics Data System (ADS)

    Damgaard, Sidse; Mølmer, Klaus; Sherson, Jacob

    2013-05-01

    Progress in the field of quantum computation is hampered by daunting technical challenges. Here we present an alternative approach to solving these by enlisting the aid of computer players around the world. We have previously examined a quantum computation architecture involving ultracold atoms in optical lattices and strongly focused tweezers of light. In The Quantum Computer Game (see http://www.scienceathome.org/), we have encapsulated the time-dependent Schrödinger equation for the problem in a graphical user interface allowing for easy user input. Players can then search the parameter space with real-time graphical feedback in a game context with a global high-score list that rewards short gate times and robustness to experimental errors. The game, which is still in a demo version, has so far been tried by several hundred players. Extensions of the approach to other models such as Gross-Pitaevskii and Bose-Hubbard are currently under development. The game has also been incorporated into science education at high-school and university level as an alternative method for teaching quantum mechanics. Initial quantitative evaluation results are very positive. AU Ideas Center for Community Driven Research, CODER.

  7. Success in introductory college physics: The role of gender, high school preparation, and student learning perceptions

    NASA Astrophysics Data System (ADS)

    Chen, Jean Chi-Jen

    Physics is fundamental for science, engineering, medicine, and for understanding many phenomena encountered in people's daily lives. The purpose of this study was to investigate the relationships between student success in college-level introductory physics courses and various educational and background characteristics. The primary variables of this study were gender, high school mathematics and science preparation, preference and perceptions of learning physics, and performance in introductory physics courses. Demographic characteristics considered were age, student grade level, parents' occupation and level of education, high school senior grade point average, and educational goals. A Survey of Learning Preference and Perceptions was developed to collect the information for this study. A total of 267 subjects enrolled in six introductory physics courses, four algebra-based and two calculus-based, participated in the study conducted during Spring Semester 2002. The findings from the algebra-based physics courses indicated that participant's educational goal, high school senior GPA, father's educational level, mother's educational level, and mother's occupation in the area of science, engineering, or computer technology were positively related to performance while participant age was negatively related. Biology preparation, mathematics preparation, and additional mathematics and science preparation in high school were also positively related to performance. The relationships between the primary variables and performance in calculus-based physics courses were limited to high school senior year GPA and high school physics preparation. Findings from all six courses indicated that participant's educational goal, high school senior GPA, father's educational level, and mother's occupation in the area of science, engineering, or computer technology, high school preparation in mathematics, biology, and the completion of additional mathematics and science courses were positively related to performance. No significant performance differences were found between male and female students. However, there were significant gender differences in physics learning perceptions. Female participants tended to try to understand physics materials and relate the physics problems to real world situations while their male counterparts tended to rely on rote learning and equation application. This study found that participants performed better by trying to understand the physics material and relate physics problems to real world situations. Participants who relied on rote learning did not perform well.

  8. Can aircraft noise less than or equal to 115 dBA adversely affect reproductive outcome in USAF women?

    NASA Astrophysics Data System (ADS)

    Brubaker, P. A.

    1985-06-01

    It has been suggested, mainly through animal studies, that exposure to high noise levels may be associated with lower birth weight, reduced gestational length and other adverse reproductive outcomes. Few studies have been done on humans to show this association. The Air Force employs pregnant women in areas where there is a high potential for exposure to high noise levels. This study proposes a method to determine if there is an association between high frequency noise levels less than or equal to 115 dBA and adverse reproductive outcomes through a review of records and self-administered questionnaires in a case-comparison design. Prevalence rates will be calculated and a multiple logistic regression analysis computed for the independent variables that can affect reproduction.

  9. Application of a fast skyline computation algorithm for serendipitous searching problems

    NASA Astrophysics Data System (ADS)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. The JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
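
    As a baseline for what skyline computation does, the following naive Python sketch keeps exactly the entries not dominated by any other entry (assuming larger attribute values are preferred). Indexing structures such as the JR-tree described above exist precisely to avoid this quadratic scan on large, high-dimensional, dynamically changing data.

      # Naive skyline (Pareto-optimal set): keep entries not dominated by any other.
      def dominates(a, b):
          """a dominates b if a is >= b in every attribute and > in at least one."""
          return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

      def skyline(points):
          return [p for p in points if not any(dominates(q, p) for q in points)]

      data = [(3, 9), (7, 7), (9, 2), (5, 5), (8, 8)]
      print(skyline(data))   # -> [(3, 9), (9, 2), (8, 8)]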

  10. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    PubMed

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
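
    The scoring kernel at the heart of such search engines is essentially a similarity between a binned theoretical spectrum and a binned experimental spectrum. The Python sketch below shows a normalized dot product at unit-mass binning; the bin width, normalization, and toy peak lists are arbitrary choices here, and this is not Tempest's actual XCorr or Accelerated Score implementation.

      # Sketch of a binned dot-product similarity between two MS/MS spectra.
      import numpy as np

      def binned_vector(mz, intensity, max_mz=2000.0, bin_width=1.0):
          vec = np.zeros(int(max_mz / bin_width) + 1)
          idx = np.minimum((np.asarray(mz) / bin_width).astype(int), len(vec) - 1)
          np.add.at(vec, idx, np.asarray(intensity, dtype=float))
          norm = np.linalg.norm(vec)
          return vec / norm if norm > 0 else vec

      def spectral_dot(mz1, int1, mz2, int2):
          return float(binned_vector(mz1, int1) @ binned_vector(mz2, int2))

      # Two toy spectra sharing fragment peaks near 500 and 612 m/z.
      print(spectral_dot([500.3, 612.3], [1.0, 0.5], [500.4, 612.3], [0.9, 0.6]))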

  11. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  12. The Ability of Beginning University Chemistry Students to Use ICT (Information and Communication Technology) in Their Learning in 2002

    ERIC Educational Resources Information Center

    Lim, Kieran F.

    2003-01-01

    There is an assumption that high-school students are becoming more computer literate, but published studies of specific skill level are lacking. An anonymous multiple-choice survey self-assessed the ICT (information and communication technology) skills of first-year chemistry students at the beginning of 2002. The general level of ICT skill…

  13. The Influence of Experiencing Success in Math on Math Anxiety, Perceived Math Competence, and Math Performance

    ERIC Educational Resources Information Center

    Jansen, Brenda R. J.; Louwerse, Jolien; Straatemeier, Marthe; Van der Ven, Sanne H. G.; Klinkenberg, Sharon; Van der Maas, Han L. J.

    2013-01-01

    It was investigated whether children would experience less math anxiety and feel more competent when they, independent of ability level, experienced high success rates in math. Comparable success rates were achieved by adapting problem difficulty to individuals' ability levels with a computer-adaptive program. A total of 207 children (grades 3-6)…

  14. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  15. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  16. Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.

    1991-01-01

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are on the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.

  17. Verification of Security Policy Enforcement in Enterprise Systems

    NASA Astrophysics Data System (ADS)

    Gupta, Puneet; Stoller, Scott D.

    Many security requirements for enterprise systems can be expressed in a natural way as high-level access control policies. A high-level policy may refer to abstract information resources, independent of where the information is stored; it controls both direct and indirect accesses to the information; it may refer to the context of a request, i.e., the request’s path through the system; and its enforcement point and enforcement mechanism may be unspecified. Enforcement of a high-level policy may depend on the system architecture and the configurations of a variety of security mechanisms, such as firewalls, host login permissions, file permissions, DBMS access control, and application-specific security mechanisms. This paper presents a framework in which all of these can be conveniently and formally expressed, a method to verify that a high-level policy is enforced, and an algorithm to determine a trusted computing base for each resource.

  18. On the impact of approximate computation in an analog DeSTIN architecture.

    PubMed

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
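
    As a hedged illustration of this style of analysis (not the error model used in the paper), analog nonideality is often approximated in simulation by injecting additive Gaussian noise into each multiply-accumulate; sweeping the noise level then shows how gracefully a classification benchmark degrades.

        // Minimal sketch: a dot product with per-operation additive Gaussian noise,
        // a simple stand-in for errors introduced by analog computational elements.
        #include <cstddef>
        #include <random>
        #include <vector>

        double noisy_dot(const std::vector<double>& x, const std::vector<double>& w,
                         double noise_std, std::mt19937& rng) {
            std::normal_distribution<double> noise(0.0, noise_std);
            double acc = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)
                acc += x[i] * w[i] + noise(rng);   // each "analog" multiply contributes its own error
            return acc;
        }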

  19. Algorithms in nature: the convergence of systems biology and computational thinking

    PubMed Central

    Navlakha, Saket; Bar-Joseph, Ziv

    2011-01-01

    Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future. PMID:22068329

  20. Use of Nonequilibrium Work Methods to Compute Free Energy Differences Between Molecular Mechanical and Quantum Mechanical Representations of Molecular Systems.

    PubMed

    Hudson, Phillip S; Woodcock, H Lee; Boresch, Stefan

    2015-12-03

    Carrying out free energy simulations (FES) using quantum mechanical (QM) Hamiltonians remains an attractive, albeit elusive goal. Renewed efforts in this area have focused on using "indirect" thermodynamic cycles to connect "low level" simulation results to "high level" free energies. The main obstacle to computing converged free energy results between molecular mechanical (MM) and QM (ΔA(MM→QM)), as recently demonstrated by us and others, is differences in the so-called "stiff" degrees of freedom (e.g., bond stretching) between the respective energy surfaces. Herein, we demonstrate that this problem can be efficiently circumvented using nonequilibrium work (NEW) techniques, i.e., Jarzynski's and Crooks' equations. Initial applications of computing ΔA(NEW)(MM→QM), for blocked amino acids alanine and serine as well as to generate butane's potentials of mean force via the indirect QM/MM FES method, showed marked improvement over traditional FES approaches.
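
    For reference, the two nonequilibrium work relations invoked above can be stated in their standard forms, with W the switching work for the MM to QM transformation, beta = 1/kT, and P_F, P_R the forward and reverse work distributions:

        % Standard statements of Jarzynski's equality and Crooks' fluctuation theorem,
        % written here for the MM -> QM alchemical switch.
        \begin{align}
          e^{-\beta \Delta A_{\mathrm{MM}\rightarrow\mathrm{QM}}}
            &= \left\langle e^{-\beta W} \right\rangle_{\mathrm{MM}}
            && \text{(Jarzynski)} \\
          \frac{P_{\mathrm{F}}(W)}{P_{\mathrm{R}}(-W)}
            &= e^{\beta \left( W - \Delta A_{\mathrm{MM}\rightarrow\mathrm{QM}} \right)}
            && \text{(Crooks)}
        \end{align}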

  1. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    PubMed

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  2. A brain-computer interface based on self-regulation of gamma-oscillations in the superior parietal cortex

    NASA Astrophysics Data System (ADS)

    Grosse-Wentrup, Moritz; Schölkopf, Bernhard

    2014-10-01

    Objective. Brain-computer interface (BCI) systems are often based on motor- and/or sensory processes that are known to be impaired in late stages of amyotrophic lateral sclerosis (ALS). We propose a novel BCI designed for patients in late stages of ALS that only requires high-level cognitive processes to transmit information from the user to the BCI. Approach. We trained subjects via EEG-based neurofeedback to self-regulate the amplitude of gamma-oscillations in the superior parietal cortex (SPC). We argue that parietal gamma-oscillations are likely to be associated with high-level attentional processes, thereby providing a communication channel that does not rely on the integrity of sensory- and/or motor-pathways impaired in late stages of ALS. Main results. Healthy subjects quickly learned to self-regulate gamma-power in the SPC by alternating between states of focused attention and relaxed wakefulness, resulting in an average decoding accuracy of 70.2%. One locked-in ALS patient (ALS-FRS-R score of zero) achieved an average decoding accuracy significantly above chance-level though insufficient for communication (55.8%). Significance. Self-regulation of gamma-power in the SPC is a feasible paradigm for brain-computer interfacing and may be preserved in late stages of ALS. This provides a novel approach to testing whether completely locked-in ALS patients retain the capacity for goal-directed thinking.

  3. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow in favor of a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce hardware development time, as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the OpenCL AES kernel using the Intel FPGA SDK for OpenCL and the Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound on the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  4. [Morphological analysis of alveolar bone of anterior mandible in high-angle skeletal class II and class III malocclusions assessed with cone-beam computed tomography].

    PubMed

    Ma, J; Jiang, J H

    2018-02-18

    To evaluate differences in alveolar bone support beneath the lower anterior teeth between high-angle adults with skeletal class II malocclusions and high-angle adults with skeletal class III malocclusions using cone-beam computed tomography (CBCT). Patients who had undergone CBCT imaging at the Peking University School and Hospital of Stomatology between October 2015 and August 2017 were selected. The CBCT archives of 62 high-angle adults without prior orthodontic treatment were divided into two groups based on their sagittal jaw relationships: skeletal class II and skeletal class III. Vertical bone level (VBL), alveolar bone area (ABA), and alveolar bone width were measured at 2 mm, 4 mm, and 6 mm below the cemento-enamel junction (CEJ) and at the apical level. Independent-samples t-tests were then conducted for statistical comparisons. The ABA of the mandibular alveolar bone in the lower anterior region was significantly smaller in the skeletal class III patients than in the skeletal class II patients, especially the apical ABA, the total ABA on the labial and lingual sides, and the ABA at 6 mm below the CEJ on the lingual side (P<0.05). The alveolar bone of the mandibular anterior teeth was also significantly thinner in the skeletal class III subjects than in the skeletal class II subjects, especially at the apical level on the labial and lingual sides and at 4 mm and 6 mm below the CEJ on the lingual side (P<0.05). In summary, both the ABA and the thickness of the alveolar bone of the mandibular anterior teeth were significantly reduced in high-angle skeletal class III adults compared with high-angle skeletal class II adults. We recommend that orthodontists be more cautious when treating high-angle skeletal class III patients, paying particular attention to controlling the torque of the lower anterior teeth during forward and backward movement, lest root resorption or fenestration occur in the lower anterior region.

  5. Computer and internet use among urban African Americans with type 2 diabetes.

    PubMed

    Jackson, Chandra L; Batts-Turner, Marian L; Falb, Matthew D; Yeh, Hsin-Chieh; Brancati, Frederick L; Gary, Tiffany L

    2005-12-01

    Previous studies have identified a "digital divide" between African Americans and whites, with African Americans having substantially lower rates of Internet use. However, use of the Internet to access health information has not been sufficiently evaluated in this population. Therefore, we conducted a telephone survey to determine the prevalence of computer and Internet use among 457 African American adults with type 2 diabetes. Participants were 78% female, with a mean age of 57 +/- 11 years, and about one-third had a yearly income of $7,500 or less. Forty percent of the participants reported having a computer at home and 46% reported knowing how to use a computer. Most participants (58%) reported that they had, at some point, used a computer, and of those, 40% reported that they used the computer to find health information. In a stratified analysis, participants with lower education levels…

  6. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. Conclusions The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems. PMID:20092652

  7. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems.

  8. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.

  9. Information-seeking behavior and computer literacy among resident doctors in Maiduguri, Nigeria.

    PubMed

    Abbas, A D; Abubakar, A M; Omeiza, B; Minoza, K

    2013-01-01

    Resident doctors are key actors in patient management in all the federal training institutions in Nigeria. Knowing the information-seeking behavior of this group of doctors and their level of computer knowledge would facilitate informed decisions in providing them with the relevant sources of information as well as encouraging the practice of evidence-based medicine. The aim was to examine information-seeking behavior among resident doctors and analyze its relationship to computer ownership and literacy. A pretested self-administered questionnaire was used to obtain information from the resident doctors in the University of Maiduguri Teaching Hospital (UMTH) and the Federal Neuro-Psychiatry Hospital (FNPH). The data fields requested included biodata, major source of medical information, level of computer literacy, and computer ownership. Other questions covered familiarity with basic computer operations as well as versatility in the use of the Internet and possession of an active e-mail address. Out of 109 questionnaires distributed, 100 were returned (91.7% response rate). Seventy-three of the 100 respondents use printed material as their major source of medical information. Ninety-three of the respondents own a laptop, a desktop, or both, while 7 have no computers. Ninety-four respondents are computer literate while 6 are not. Seventy-five respondents have an e-mail address while 25 do not. Seventy-five search the Internet for information while 25 do not know how to use the Internet. Despite the high computer ownership and literacy rate among resident doctors, printed material remains their main source of medical information.

  10. Issues in undergraduate education in computational science and high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchioro, T.L. II; Martin, D.

    1994-12-31

    The ever increasing need for mathematical and computational literacy within society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?

  11. Three-dimensional surgical simulation.

    PubMed

    Cevidanes, Lucia H C; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2010-09-01

    In this article, we discuss the development of methods for computer-aided jaw surgery, which allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3-dimensional surface models from cone-beam computed tomography, dynamic cephalometry, semiautomatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intraoperative guidance. The system provides further intraoperative assistance with a computer display showing jaw positions and 3-dimensional positioning guides updated in real time during the surgical procedure. The computer-aided surgery system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessing the difficulties of the surgical procedures before the surgery. Computer-aided surgery can make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  12. Neighbors On Line: Enhancing Global Perspectives and Cultural Sharing with the Internet.

    ERIC Educational Resources Information Center

    Lee, Okhwa; Knupfer, Nancy Nelson

    The purpose of this study was to investigate the status of computer use within Korean and United States schools, and then use the Internet to establish cross cultural communication between schools. It was hoped that through international communication using the Internet, students at the elementary, junior high, and high school levels would gain…

  13. A First Approach to Filament Dynamics

    ERIC Educational Resources Information Center

    Silva, P. E. S.; de Abreu, F. Vistulo; Simoes, R.; Dias, R. G.

    2010-01-01

    Modelling elastic filament dynamics is a topic of high interest due to the wide range of applications. However, it has reached a high level of complexity in the literature, making it inaccessible to a beginner. In this paper we explain the main steps involved in the computational modelling of the dynamics of an elastic filament. We first derive…

  14. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.

  15. A Systematic Approach to Improving E-Learning Implementations in High Schools

    ERIC Educational Resources Information Center

    Pardamean, Bens; Suparyanto, Teddy

    2014-01-01

    This study was based on the current growing trend of implementing e-learning in high schools. Most endeavors have been inefficient; the objective here was to determine the initial steps that could be taken to improve these efforts by assessing a student population's computer skill levels and performance in an IT course. Demographic factors were…

  16. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
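
    The space-time kernel density at a location (x, y, t) is typically a product of a spatial and a temporal kernel summed over nearby events; the sketch below, which assumes an Epanechnikov product kernel since the paper's exact kernel is not restated here, makes explicit why events farther away than the spatial or temporal bandwidth contribute nothing, and hence why buffers of exactly those widths suffice at subdomain boundaries.

        // Serial sketch of a space-time kernel density estimate with an assumed
        // Epanechnikov product kernel; hs and ht are the spatial and temporal bandwidths.
        #include <cmath>
        #include <vector>

        struct Event { double x, y, t; };

        double stkde(double x, double y, double t,
                     const std::vector<Event>& events, double hs, double ht) {
            const double pi = 3.14159265358979323846;
            double sum = 0.0;
            for (const Event& e : events) {
                double ds2 = ((x - e.x) * (x - e.x) + (y - e.y) * (y - e.y)) / (hs * hs);
                double dt  = (t - e.t) / ht;
                if (ds2 >= 1.0 || std::fabs(dt) >= 1.0) continue;  // outside kernel support: no contribution
                double ks = (2.0 / pi) * (1.0 - ds2);              // 2D Epanechnikov, spatial part
                double kt = 0.75 * (1.0 - dt * dt);                // 1D Epanechnikov, temporal part
                sum += ks * kt;
            }
            return sum / (static_cast<double>(events.size()) * hs * hs * ht);
        }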

  17. Brief communication: patient satisfaction with the use of tablet computers: a pilot study in two outpatient home dialysis clinics.

    PubMed

    Schick-Makaroff, Kara; Molzahn, Anita

    2014-01-01

    Electronic capture of patients' reports of their health is significant in clinical nephrology research because health-related quality of life (HRQOL) for patients with end-stage renal disease is compromised and assessment by patients of their HRQOL in practice is relatively uncommon. The purpose of this study was to evaluate patient satisfaction with and time involved in administering HRQOL and symptom assessment measures using tablet computers in two outpatient home dialysis clinics. A cross-sectional observational study design was employed. The study was conducted in two home dialysis clinics. Fifty-six patients participated in the study; 35 males (63%) and 21 females (37%) with a mean age of 66 ± 12 (36-90 years old) were included. Forty-nine participants were on peritoneal dialysis (87%), 6 on home hemodialysis (11%), and 1 on nocturnal home hemodialysis (2%). Measures included the Kidney Disease Quality of Life-36 (KDQOL-36), the Edmonton Symptom Assessment Scale (ESAS) and Participant's Level of Satisfaction in Using a Tablet Computer. Using a tablet computer, participants completed the three measures. Descriptive statistics and bivariate correlations were calculated. Participants' satisfaction with use of the tablet computer was high; 66% were "very satisfied", 7% "satisfied", 2% "slightly satisfied", and 18% "neutral". On the 7-point Likert-type scale, the mean satisfaction score was 5.11 (SD = 1.6). Mean time to complete the measures was: Level of Satisfaction 1.15 minutes (SD = 0.41), ESAS 2.55 minutes (SD = 1.04), and KDQOL 9.56 minutes (SD = 2.03); the mean time to complete all three instruments was 13.19 minutes (SD = 2.42). There were no significant correlations between level of satisfaction and age, gender, HRQOL, time taken to complete surveys, computer experience, or comfort with technology. Comfort with technology and computer experience were highly correlated, r = .7, p (one-tailed) < 0.01. Limitations include lack of generalizability because of a small self-selected sample of relatively healthy patients and a lack of psychometric testing on the measure of satisfaction. Participants were satisfied with the platform and the time involved for completion of instruments was modest. Routine use of HRQOL measures for clinical purposes may be facilitated through use of tablet computers.

  18. Computational AeroAcoustics for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Ed; Hixon, Ray; Dyson, Rodger; Huff, Dennis (Technical Monitor)

    2002-01-01

    An overview of the current state-of-the-art in computational aeroacoustics as applied to fan noise prediction at NASA Glenn is presented. Results from recent modeling efforts using three dimensional inviscid formulations in both frequency and time domains are summarized. In particular, the application of a frequency domain method, called LINFLUX, to the computation of rotor-stator interaction tone noise is reviewed and the influence of the background inviscid flow on the acoustic results is analyzed. It has been shown that the noise levels are very sensitive to the gradients of the mean flow near the surface and that the correct computation of these gradients for highly loaded airfoils is especially problematic using an inviscid formulation. The ongoing development of a finite difference time marching code that is based on a sixth order compact scheme is also reviewed. Preliminary results from the nonlinear computation of a gust-airfoil interaction model problem demonstrate the fidelity and accuracy of this approach. Spatial and temporal features of the code as well as its multi-block nature are discussed. Finally, the latest results from an ongoing effort in the area of arbitrarily high order methods are reviewed, and technical challenges associated with implementing correct high order boundary conditions are discussed and possible strategies for addressing these challenges are outlined.
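
    For reference, the classical tridiagonal sixth-order compact approximation of the first derivative on a uniform grid with spacing h reads as follows (the code described above may use a different member of this family):

        % Lele-type sixth-order compact finite-difference scheme for the first derivative.
        \begin{equation}
          \tfrac{1}{3} f'_{i-1} + f'_{i} + \tfrac{1}{3} f'_{i+1}
          = \frac{14}{9} \, \frac{f_{i+1} - f_{i-1}}{2h}
          + \frac{1}{9} \, \frac{f_{i+2} - f_{i-2}}{4h} .
        \end{equation}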

  19. Sketchcode: A Documentation Technique for Computer Hobbyists and Programmers

    ERIC Educational Resources Information Center

    Voros, Todd L.

    1978-01-01

    Sketchcode is a metaprogramming pseudo-language documentation technique intended to simplify the process of program writing and debugging for both high- and low-level users. Helpful hints and examples for the use of the technique are included. (CMV)

  20. Microcomputer Simulation of Real Gases--Part 1.

    ERIC Educational Resources Information Center

    Sperandeo-Mineo, R. M.; Tripi, G.

    1987-01-01

    Describes some simple computer programs designed to simulate the molecular dynamics of two-dimensional systems with a Lennard-Jones interaction potential. Discusses the use of the software in introductory physics courses at the high school and college level. (TW)

  1. A Graduate Professional Program in Translation.

    ERIC Educational Resources Information Center

    Waldinger, Renee

    1987-01-01

    The City University of New York Graduate School's professional program in translation combines high-level, specialized language learning in French, German, and Spanish with related graduate work in such disciplines as international affairs, finance, banking, jurisprudence, literature, and computer science. (CB)

  2. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  3. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  4. Integrating computational methods to retrofit enzymes to synthetic pathways.

    PubMed

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  5. [Feasibility and acceptance of computer-based assessment for the identification of psychosocially distressed patients in routine clinical care].

    PubMed

    Sehlen, Susanne; Ott, Martin; Marten-Mittag, Birgitt; Haimerl, Wolfgang; Dinkel, Andreas; Duehmke, Eckhart; Klein, Christian; Schaefer, Christof; Herschbach, Peter

    2012-07-01

    This study investigated feasibility and acceptance of computer-based assessment for the identification of psychosocial distress in routine radiotherapy care. 155 cancer patients were assessed using QSC-R10, PO-Bado-SF and Mach-9. The congruence between computerized tablet PC and conventional paper assessment was analysed in 50 patients. The agreement between the 2 modes was high (ICC 0.869-0.980). Acceptance of computer-based assessment was very high (>95%). Sex, age, education, distress and Karnofsky performance status (KPS) did not influence acceptance. Computerized assessment was rated more difficult by older patients (p = 0.039) and patients with low KPS (p = 0.020). 75.5% of the respondents supported referral for psycho-social intervention for distressed patients. The prevalence of distress was 27.1% (QSC-R10). Computer-based assessment allows easy identification of distressed patients. Level of staff involvement is low, and the results are quickly available for care providers. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Air, Ocean and Climate Monitoring Enhancing Undergraduate Training in the Physical, Environmental and Computer Sciences

    NASA Technical Reports Server (NTRS)

    Hope, W. W.; Johnson, L. P.; Obl, W.; Stewart, A.; Harris, W. C.; Craig, R. D.

    2000-01-01

    Faculty in the Department of Physical, Environmental and Computer Sciences strongly believe in the concept that undergraduate research and research-related activities must be integrated into the fabric of our undergraduate Science and Technology curricula. High-level skills, such as problem solving, reasoning, collaboration and the ability to engage in research, are learned for advanced study in graduate school or for competing for well-paying positions in the scientific community. One goal of our academic programs is to have a pipeline of research activities from high school to four-year college to graduate school, based on the GISS Institute on Climate and Planets model.

  7. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.

  8. Effects of sparse sampling in combination with iterative reconstruction on quantitative bone microstructure assessment

    NASA Astrophysics Data System (ADS)

    Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas

    2017-03-01

    The trabecular bone microstructure is a key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution to reduce patient exposure is sampling fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.

  9. Diels–Alder Exo-Selectivity in Terminal-Substituted Dienes and Dienophiles: Experimental Discoveries and Computational Explanations

    PubMed Central

    Lam, Yu-hong; Cheong, Paul Ha-Yeon; Mata, José M. Blasco; Stanway, Steven J.; Gouverneur, Véronique; Houk, K. N.

    2009-01-01

    The Diels–Alder reactions of a series of silyloxydienes and silylated dienes with acyclic α,β-unsaturated ketones and N-acyloxazolidinones have been investigated. The endo/exo stereochemical outcome is strongly influenced by the substitution pattern of the reactants. High exo selectivity was observed when the termini of the diene and the dienophile involved in the shorter forming bond are both substituted, while the normal endo preference was found otherwise. The exo-selective asymmetric Diels–Alder reactions using Evans’ oxazolidinone chiral auxiliary furnished a high level of π-facial selectivity in the same sense as their well-documented endo-selective counterparts. Computational results of these Diels–Alder reactions were consistent with the experimental endo/exo selectivity in most cases. A twist-asynchronous model accounts for the geometries and energies of the computed transition structures. PMID:19154113

  10. Biclustering Protein Complex Interactions with a Biclique FindingAlgorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen

    2006-12-01

    Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|) where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
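
    The classical Motzkin-Straus result that this work generalizes relates the clique number to a quadratic program over the simplex (the L1-constrained case): for a graph G with 0/1 adjacency matrix A and clique number omega(G),

        % Classical Motzkin-Straus formulation (L1 constraint); the method above
        % generalizes the constraint to an Lp norm and to maximal-edge bicliques.
        \begin{equation}
          \max_{x \ge 0,\ \sum_i x_i = 1} \; x^{\mathsf{T}} A \, x \;=\; 1 - \frac{1}{\omega(G)} .
        \end{equation}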

  11. Assessing Changes in High School Students' Conceptual Understanding through Concept Maps before and after the Computer-Based Predict-Observe-Explain (CB-POE) Tasks on Acid-Base Chemistry at the Secondary Level

    ERIC Educational Resources Information Center

    Yaman, Fatma; Ayas, Alipasa

    2015-01-01

    Although concept maps have been used as alternative assessment methods in education, there has been an ongoing debate on how to evaluate students' concept maps. This study discusses how to evaluate students' concept maps as an assessment tool before and after 15 computer-based Predict-Observe-Explain (CB-POE) tasks related to acid-base chemistry.…

  12. Discovering the intelligence in molecular biology.

    PubMed

    Uberbacher, E

    1995-12-01

    The Third International Conference on Intelligent Systems in Molecular Biology was truly an outstanding event. Computational methods in molecular biology have reached a new level of maturity and utility, resulting in many high-impact applications. The success of this meeting bodes well for the rapid and continuing development of computational methods, intelligent systems and information-based approaches for the biosciences. The basic technology, originally most often applied to 'feasibility' problems, is now dealing effectively with the most difficult real-world problems. Significant progress has been made in understanding protein-structure information, structural classification, and how functional information and the relevant features of active-site geometry can be gleaned from structures by automated computational approaches. The value and limits of homology-based methods, and the ability to classify proteins by structure in the absence of homology, have reached a new level of sophistication. New methods for covariation analysis in the folding of large structures such as RNAs have shown remarkably good results, indicating the long-term potential to understand very complicated molecules and multimolecular complexes using computational means. Novel methods, such as HMMs, context-free grammars and the uses of mutual information theory, have taken center stage as highly valuable tools in our quest to represent and characterize biological information. A focus on creative uses of intelligent systems technologies and the trend toward biological application will undoubtedly continue and grow at the 1996 ISMB meeting in St Louis.

  13. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros III, James H.; DeBonis, David; Grant, Ryan

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  15. Digital multicolor printing: state of the art and future challenges

    NASA Astrophysics Data System (ADS)

    Kipphan, Helmut

    1995-04-01

    During the last 5 years, digital techniques have become extremely important in the graphic arts industry. All sections in the production flow for producing multicolor printed products - prepress, printing and postpress - are influenced by digitalization, in both an evolutionary and a revolutionary way. New equipment and network techniques bring all the sections closer together. The focus is on high-quality multicolor printing combined with high productivity. Conventional offset printing technology is compared with the leading nonimpact printing technologies. Computer-to-press is contrasted with computer-to-print techniques. The newest available digital multicolor presses are described - the direct imaging offset printing press from HEIDELBERG with a new laser imaging technique as well as the INDIGO and XEIKON presses based on electrophotography. Based on technical specifications, economic calculations and print quality, it is shown that each technique has its own market segments. An outlook is given for future computer-to-press techniques and the potential of nonimpact printing technologies for advanced high-speed multicolor computer-to-print equipment. Synergy effects between the NIP technologies and the conventional printing technologies, in both directions, make innovative new products possible, for example hybrid printing systems. It is also shown that there is potential for improving print quality through special screening algorithms and a higher number of grey levels per pixel by using NIP technologies. As an intermediate step in the digitalization of the production flow, and also as an economical solution, computer-to-plate equipment is described. Producing printed products in a fully digital way requires digital color proofing as well as color management systems. The newest high-tech equipment using NIP technologies for producing proofs is explained. All in all, it is shown that the state of the art in digital multicolor printing has reached a very high level in technology, productivity and quality, but that there is still room for improvements and innovations. Manufacturers of equipment and producers of printed products can take part in a successful evolution: changes, chances and challenges must be recognized and considered for future-oriented activities and investments.

  16. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, and (ii) local transport of the interface network employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to that of the semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
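
    The paper's construction is not given in the record; as a hedged sketch of one ingredient named in the abstract, the reconstruction of a single global level set from a multi-region field, the following uses an unsigned distance to the nearest interface computed with SciPy's Euclidean distance transform. Everything beyond that phrase (the label map, the interface detection, the distance-based definition) is an assumption made for illustration.

```python
# Hedged sketch: rebuild a single "global" level-set-like field from a
# multi-region label map, as the distance to the nearest interface cell.
# This only illustrates the phrase "reconstruction of a global level set";
# the paper's actual construction is not given in the record.
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 2-D label map with three regions meeting at a triple point.
n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
labels = np.where(x < n // 2, 0, np.where(y < n // 2, 1, 2))

# Interface cells: any cell whose 4-neighborhood contains another label.
interface = np.zeros_like(labels, dtype=bool)
interface[:-1, :] |= labels[:-1, :] != labels[1:, :]
interface[1:, :]  |= labels[1:, :] != labels[:-1, :]
interface[:, :-1] |= labels[:, :-1] != labels[:, 1:]
interface[:, 1:]  |= labels[:, 1:] != labels[:, :-1]

# Global level-set surrogate: distance to the nearest interface cell
# (zero on the interfaces themselves).
phi = distance_transform_edt(~interface)
print(phi.shape, phi.min(), phi.max())
```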

  17. New Possibilities of Substance Identification Based on THz Time Domain Spectroscopy Using a Cascade Mechanism of High Energy Level Excitation

    PubMed Central

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Zakharova, Irina G.; Zagursky, Dmitry Yu.

    2017-01-01

    Using an experiment with thin paper layers and computer simulation, we demonstrate the principal limitations of standard Time Domain Spectroscopy (TDS), based on a broadband THz pulse, for the detection and identification of a substance placed inside a disordered structure. We demonstrate the spectrum broadening of both transmitted and reflected pulses due to the cascade mechanism of high-energy-level excitation, considering, for example, a three-energy-level medium. The pulse spectrum in the range of high frequencies remains undisturbed in the presence of a disordered structure. To avoid false detection of absorption frequencies, we apply the spectral dynamics analysis method (SDA-method) together with certain integral correlation criteria (ICC). PMID:29186849

  18. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper presents the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data; other systems feed raw data directly to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating these low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on the application of some human vision models and neural network models are analyzed.
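
    As a generic refresher on one of the surveyed models, the multilayer back-propagation perceptron, here is a minimal NumPy sketch that learns XOR; it is a textbook toy chosen for illustration, not a network from the review.

```python
# Minimal multilayer perceptron trained with back-propagation (textbook toy,
# not a model from the review).  Learns XOR with one hidden layer.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss, gradients w.r.t. pre-activations).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # typically close to [0, 1, 1, 0]
```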

  19. (Re)engineering Earth System Models to Expose Greater Concurrency for Ultrascale Computing: Practice, Experience, and Musings

    NASA Astrophysics Data System (ADS)

    Mills, R. T.

    2014-12-01

    As the high performance computing (HPC) community pushes towards the exascale horizon, the importance and prevalence of fine-grained parallelism in new computer architectures are increasing. This is perhaps most apparent in the proliferation of so-called "accelerators" such as the Intel Xeon Phi or NVIDIA GPGPUs, but the trend also holds for CPUs, where serial performance has grown slowly and effective use of hardware threads and vector units is becoming increasingly important to realizing high performance. This has significant implications for weather, climate, and Earth system modeling codes, many of which display impressive scalability across MPI ranks but take relatively little advantage of threading and vector processing. In addition to increasing parallelism, next-generation codes will also need to address increasingly deep hierarchies for data movement: NUMA/cache levels, on-node vs. off-node, local vs. wide neighborhoods on the interconnect, and even the I/O system. We will discuss some approaches (grounded in experience with the Intel Xeon Phi architecture) for restructuring Earth science codes to maximize concurrency across multiple levels (vectors, threads, MPI ranks), and also some novel approaches for minimizing expensive data movement/communication.
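
    The abstract names three levels of concurrency (vectors, threads, MPI ranks) but shows no code. The hedged Python sketch below only gestures at the first two, with NumPy whole-array arithmetic standing in for vectorization and a process pool standing in for threading; the MPI-rank level is left as a comment. The stencil-like update and the chunking scheme are invented for illustration.

```python
# Hedged illustration of layered concurrency in the spirit of the abstract:
# whole-array (vector-like) arithmetic inside each chunk, a process pool across
# cores (standing in for threads), and -- not shown -- MPI ranks across nodes.
import numpy as np
from multiprocessing import Pool


def update_chunk(chunk):
    """Vector-like level: one whole-array averaging update per chunk."""
    return 0.5 * (np.roll(chunk, 1) + np.roll(chunk, -1))


if __name__ == "__main__":
    field = np.random.rand(1_000_000)
    chunks = np.array_split(field, 8)      # node-local work decomposition
    with Pool(processes=4) as pool:        # spread chunks over cores
        updated = pool.map(update_chunk, chunks)
    field = np.concatenate(updated)
    # In a real Earth-system code, this node-local work would itself sit inside
    # an MPI-rank domain decomposition (e.g., via mpi4py), the third level.
    print(field.mean())
```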

  20. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high-resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor-intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques that classify various lung tissue types. Second- and higher-order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans lie in the range [-1024, 1024]. Calculating second-order statistics over this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification: an optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
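
    The record states that the histogram is binned nonlinearly with a dynamic programming algorithm but does not give the objective function. The sketch below assumes a within-bin weighted-variance criterion (a 1-D k-means / Jenks-style partition) as a stand-in, and uses a 256-level toy histogram in place of the full [-1024, 1024] CT range; both choices are assumptions made for illustration.

```python
# Hedged sketch of dynamic-programming histogram binning.  The objective used
# here (within-bin weighted variance) is an assumption, not the paper's.
import numpy as np


def optimal_bins(counts, n_bins):
    """Partition gray levels 0..V-1 (with histogram `counts`) into `n_bins`
    contiguous bins minimizing total within-bin weighted variance.
    Returns the sorted list of bin start indices."""
    V = len(counts)
    x = np.arange(V, dtype=float)
    w = np.asarray(counts, dtype=float)
    # Prefix sums give O(1) per-bin cost: sum(w), sum(w*x), sum(w*x^2).
    W = np.concatenate([[0.0], np.cumsum(w)])
    S1 = np.concatenate([[0.0], np.cumsum(w * x)])
    S2 = np.concatenate([[0.0], np.cumsum(w * x * x)])

    def cost(i, j):  # bin covering gray levels i..j inclusive
        wt = W[j + 1] - W[i]
        if wt == 0.0:
            return 0.0
        s1 = S1[j + 1] - S1[i]
        s2 = S2[j + 1] - S2[i]
        return s2 - s1 * s1 / wt  # weighted sum of squared deviations

    INF = float("inf")
    D = np.full((n_bins + 1, V), INF)          # D[k, j]: best cost for 0..j, k bins
    back = np.zeros((n_bins + 1, V), dtype=int)
    for j in range(V):
        D[1][j] = cost(0, j)
    for k in range(2, n_bins + 1):
        for j in range(k - 1, V):
            for i in range(k - 1, j + 1):      # last bin covers i..j
                c = D[k - 1][i - 1] + cost(i, j)
                if c < D[k][j]:
                    D[k][j], back[k][j] = c, i
    # Recover bin start indices by backtracking.
    starts, j = [], V - 1
    for k in range(n_bins, 1, -1):
        i = back[k][j]
        starts.append(int(i))
        j = i - 1
    starts.append(0)
    return sorted(starts)


# Toy demo: 256 gray levels (stand-in for the CT range) into 16 bins.
rng = np.random.default_rng(0)
hist = rng.poisson(lam=20, size=256) + np.concatenate(
    [np.zeros(200, dtype=int), rng.poisson(lam=200, size=56)])
print(optimal_bins(hist, 16))
```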
