Sample records for the query "achieve high computational"

  1. Does Recreational Computer Use Affect High School Achievement?

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Berland, Matthew

    2013-01-01

    Historically, the relationship between student academic achievement and use of computers for fun and video gaming has been described from a multitude of perspectives, from positive, to negative, to neutral. However, recent research has indicated that computer use and video gaming may be positively associated with achievement, yet these studies…

  2. One-to-One Computing and Student Achievement in Ohio High Schools

    ERIC Educational Resources Information Center

    Williams, Nancy L.; Larwin, Karen H.

    2016-01-01

    This study explores the impact of one-to-one computing on student achievement in Ohio high schools as measured by performance on the Ohio Graduation Test (OGT). The sample included 24 treatment schools that were individually paired with a similar control school. An interrupted time series methodology was deployed to examine OGT data over a period…

  3. Computer simulations in the high school: students' cognitive stages, science process skills and academic achievement in microbiology

    NASA Astrophysics Data System (ADS)

    Huppert, J.; Michal Lomask, S.; Lazarowitz, R.

    2002-08-01

    Computer-assisted learning, including simulated experiments, has great potential to address the problem solving process which is a complex activity. It requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth grade biology students to use problem solving skills whilst simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher students' achievement was, except in the control group where students in the concrete and transition operational stages did not differ. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science which require high cognitive skills.

  4. Comparing Computer Game and Traditional Lecture Using Experience Ratings from High and Low Achieving Students

    ERIC Educational Resources Information Center

    Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David

    2012-01-01

    Computer games are purported to be effective instructional tools that enhance motivation and improve engagement. The aim of this study was to investigate how tertiary student experiences change when instruction was computer game based compared to lecture based, and whether experiences differed between high and low achieving students. Participants…

  5. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  6. Using Computer Animation and Illustration Activities to Improve High School Students' Achievement in Molecular Genetics

    ERIC Educational Resources Information Center

    Marbach-Ad, Gili; Rotbain, Yosi; Stavy, Ruth

    2008-01-01

    Our main goal in this study was to determine whether the use of computer animation and illustration activities in high school can contribute to student achievement in molecular genetics. Three comparable groups of eleventh- and twelfth-grade students participated: the control group (116 students) was taught in the traditional lecture format,…

  7. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie

    2013-10-01

The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
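The 10^12 figure quoted in this record follows directly from the other numbers in the abstract; a quick sanity check (using only the abstract's own approximate figures):

```python
# Figures quoted in the abstract (approximate).
brain_ops_per_s = 1e16
brain_watts = 20.0
brain_cm3 = 1200.0

super_ops_per_s = 1e15
super_watts = 3e6          # 3 MW
super_cm3 = 1500.0 * 1e6   # 1500 m^3 expressed in cm^3

# The abstract's efficiency metric: operations/s per W per cm^3.
brain_eff = brain_ops_per_s / (brain_watts * brain_cm3)
super_eff = super_ops_per_s / (super_watts * super_cm3)

advantage = brain_eff / super_eff
print(f"{advantage:.2e}")  # ~1.9e12, consistent with the quoted 10^12
```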

  8. Recent achievements in real-time computational seismology in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liang, W.; Huang, B.

    2012-12-01

Real-time computational seismology is now achievable, but it requires a tight connection between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) using continuous BATS records and the moment tensor inversion (CMT) technique. A real-time online earthquake simulation service (ROS) is also open to researchers and to the public for earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be provided within 5 minutes of an earthquake's occurrence (RMT obtains point-source information in under 120 s; ROS completes a 3D simulation in under 3 minutes). All of these computational results are now posted on the internet in real time. For more information, visit the real-time computational seismology (RCS) earthquake report webpage.

  9. The Impact of Socioeconomic Status on Achievement of High School Students Participating in a One-to-One Laptop Computer Program

    ERIC Educational Resources Information Center

    Weers, Anthony J.

    2012-01-01

The purpose of this study was to determine the impact of socioeconomic status on the achievement of high school students participating in a one-to-one laptop computer program. Students living in poverty struggle to achieve in schools across the country; educators must address this issue. The independent variable in this study is socioeconomic…

  10. Student Achievement in Computer Programming: Lecture vs Computer-Aided Instruction

    ERIC Educational Resources Information Center

    Tsai, San-Yun W.; Pohl, Norval F.

    1978-01-01

    This paper discusses a study of the differences in student learning achievement, as measured by four different types of common performance evaluation techniques, in a college-level computer programming course under three teaching/learning environments: lecture, computer-aided instruction, and lecture supplemented with computer-aided instruction.…

  11. Computer Assisted Project-Based Instruction: The Effects on Science Achievement, Computer Achievement and Portfolio Assessment

    ERIC Educational Resources Information Center

    Erdogan, Yavuz; Dede, Dinçer

    2015-01-01

    The purpose of this study is to compare the effects of computer assisted project-based instruction on learners' achievement in a science and technology course, in a computer course and in portfolio development. With this aim in mind, a quasi-experimental design was used and a sample of 70 seventh grade secondary school students from Org. Esref…

  12. Computation Error Analysis: Students with Mathematics Difficulty Compared to Typically Achieving Students

    ERIC Educational Resources Information Center

    Nelson, Gena; Powell, Sarah R.

    2018-01-01

    Though proficiency with computation is highly emphasized in national mathematics standards, students with mathematics difficulty (MD) continue to struggle with computation. To learn more about the differences in computation error patterns between typically achieving students and students with MD, we assessed 478 third-grade students on a measure…

  13. Paper-Based and Computer-Based Concept Mappings: The Effects on Computer Achievement, Computer Anxiety and Computer Attitude

    ERIC Educational Resources Information Center

    Erdogan, Yavuz

    2009-01-01

The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on the computer hardware achievement, computer anxiety and computer attitude of eighth grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…

  14. The Effects of Computer Animated Dissection versus Preserved Animal Dissection on the Student Achievement in a High School Biology Class.

    ERIC Educational Resources Information Center

    Kariuki, Patrick; Paulson, Ronda

    The purpose of this study was to examine the effectiveness of computer-animated dissection techniques versus the effectiveness of traditional dissection techniques as related to student achievement. The sample used was 104 general biology students from a small, rural high school in Northeast Tennessee. Random selection was used to separate the…

  15. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to the task. Both multiple-instruction multiple-datastream (MIMD) and single-instruction multiple-datastream (SIMD) machines have produced strong results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  16. What Does Quality Programming Mean for High Achieving Students?

    ERIC Educational Resources Information Center

    Samudzi, Cleo

    2008-01-01

    The Missouri Academy of Science, Mathematics and Computing (Missouri Academy) is a two-year accelerated, early-entrance-to-college, residential school that matches the level, complexity and pace of the curriculum with the readiness and motivation of high achieving high school students. The school is a part of Northwest Missouri State University…

  17. A computer-based measure of resultant achievement motivation.

    PubMed

    Blankenship, V

    1987-08-01

    Three experiments were conducted to develop a computer-based measure of individual differences in resultant achievement motivation (RAM) on the basis of level-of-aspiration, achievement motivation, and dynamics-of-action theories. In Experiment 1, the number of atypical shifts and greater responsiveness to incentives on 21 trials with choices among easy, intermediate, and difficult levels of an achievement-oriented game were positively correlated and were found to differentiate the 62 subjects (31 men, 31 women) on the amount of time they spent at a nonachievement task (watching a color design) 1 week later. In Experiment 2, test-retest reliability was established with the use of 67 subjects (15 men, 52 women). Point and no-point trials were offered in blocks, with point trials first for half the subjects and no-point trials first for the other half. Reliability was higher for the atypical-shift measure than for the incentive-responsiveness measure and was higher when points were offered first. In Experiment 3, computer anxiety was manipulated by creating a simulated computer breakdown in the experimental condition. Fifty-nine subjects (13 men, 46 women) were randomly assigned to the experimental condition or to one of two control conditions (an interruption condition and a no-interruption condition). Subjects with low RAM, as demonstrated by a low number of typical shifts, took longer to choose the achievement-oriented task, as predicted by the dynamics-of-action theory. The difference was evident in all conditions and most striking in the computer-breakdown condition. A change of focus from atypical to typical shifts is discussed.

  18. Computer Games for the Math Achievement of Diverse Students

    ERIC Educational Resources Information Center

    Kim, Sunha; Chang, Mido

    2010-01-01

    Although computer games as a way to improve students' learning have received attention by many educational researchers, no consensus has been reached on the effects of computer games on student achievement. Moreover, there is lack of empirical research on differential effects of computer games on diverse learners. In response, this study…

  19. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prowell, Stacy J; Symons, Christopher T

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  20. Technologies for Achieving Field Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Nagashima, Akira

Although the term “ubiquitous” may sound like jargon used in information appliances, ubiquitous computing is an emerging concept in industrial automation. This paper presents the author's visions of field ubiquitous computing, which is based on the novel Internet Protocol IPv6. IPv6-based instrumentation will realize next-generation manufacturing excellence. This paper focuses on the following five key issues: 1. IPv6 standardization; 2. IPv6 interfaces embedded in field devices; 3. Compatibility with FOUNDATION fieldbus; 4. Network security for field applications; and 5. Wireless technologies to complement IP instrumentation. Furthermore, the paper discusses the principles of digital plant operations and ubiquitous production that support these key technologies in achieving field ubiquitous systems.

  1. Longitudinal study of low and high achievers in early mathematics.

    PubMed

    Navarro, Jose I; Aguilar, Manuel; Marchena, Esperanza; Ruiz, Gonzalo; Menacho, Inmaculada; Van Luit, Johannes E H

    2012-03-01

Longitudinal studies allow us to identify which specific maths skills are weak in young children, and whether there is a continuing weakness in these areas throughout their school years. This 2-year study investigated whether certain socio-demographic variables affect early mathematical competency in children aged 5-7 years. A randomly selected sample of 127 students (64 female; 63 male) participated. At the start of the study, the students were approximately 5 years old (M = 5.2; SD = 0.28; range = 4.5-5.8). The students were assessed using the Early Numeracy Test and then allocated to a high (n = 26), middle (n = 76), or low (n = 25) achievers group. The same children were assessed again with the Early Numeracy Test at 6 and 7 years old, respectively. Eight socio-demographic characteristics were also evaluated: family model, education of the parent(s), job of the parent(s), number of family members, birth order, number of computers at home, frequency of teacher visits, and hours watching television. Early Numeracy Test scores were more consistent for the high-achievers group than for the low-achievers group. Approximately 5.5% of low achievers obtained low scores throughout the study. A link between specific socio-demographic characteristics and early achievement in mathematics was only found for number of computers at home. The level of mathematical ability among students aged 5-7 years remains relatively stable regardless of the initial level of achievement. However, early screening for mathematics learning disabilities could be useful in helping low-achieving students overcome learning obstacles. ©2011 The British Psychological Society.

  2. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. We used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to author rank. Suggestions for further studies are discussed.
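The core of a co-authorship analysis like the one described in this record can be sketched in a few lines: each paper's author list contributes one edge per unordered author pair, and authors are ranked by paper count. This is a minimal illustration with made-up names, not the study's Scopus pipeline:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: each paper is a list of author surnames.
papers = [
    ["Ahn", "Jung", "Kim"],
    ["Ahn", "Kim"],
    ["Jung", "Lee"],
]

# Each unordered author pair appearing on a paper adds one co-authorship edge.
edges = Counter()
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        edges[(a, b)] += 1

# Rank authors by number of papers, as the study does before drawing the network.
paper_counts = Counter(name for authors in papers for name in set(authors))

print(edges.most_common(1))  # [(('Ahn', 'Kim'), 2)] -- the strongest tie
```

The `edges` counter is exactly the weighted edge list a graph library would ingest to draw the network.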

  3. The Effect of Computer Games on Students' Critical Thinking Disposition and Educational Achievement

    ERIC Educational Resources Information Center

    Seifi, Mohammad; Derikvandi, Zahra; Moosavipour, Saeed; Khodabandelou, Rouhollah

    2015-01-01

The main aim of this research was to investigate the effect of computer games on students' critical thinking disposition and educational achievement. The research method was descriptive, and its type was causal-comparative. The sample included 270 female high school students in Andimeshk town selected by a multistage cluster method. Ricketts…

  4. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solve.
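The cache-as-vector-registers idea in this record amounts to strip-mining: a long vector operation is processed in chunks small enough to stay resident in cache, the way a Cray vector register holds 64 elements. A schematic sketch (in Python for readability; the actual i860 routines were Fortran-callable assembler, and the strip length here is illustrative):

```python
STRIP = 64  # plays the role of the Cray vector-register length

def strip_mined_axpy(a, x, y):
    """Compute y += a * x in cache-sized strips (schematic)."""
    n = len(x)
    for start in range(0, n, STRIP):
        end = min(start + STRIP, n)
        # Within one strip, the operands fit in "fast" storage (cache),
        # mimicking a single concurrent vector instruction on the strip.
        for i in range(start, end):
            y[i] += a * x[i]
    return y

x = [1.0] * 200
y = [2.0] * 200
strip_mined_axpy(3.0, x, y)
print(y[0], y[-1])  # 5.0 5.0
```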

  5. Effects of Computer Course on Computer Self-Efficacy, Computer Attitudes and Achievements of Young Individuals in Siirt, Turkey

    ERIC Educational Resources Information Center

    Çelik, Halil Coskun

    2015-01-01

    The purpose of this study is to investigate the effects of computer courses on young individuals' computer self-efficacy, attitudes and achievement. The study group of this research included 60 unemployed young individuals (18-25 ages) in total; 30 in the experimental group and 30 in the control group. An experimental research model with pretest…

  6. Accelerated Mathematics and High-Ability Students' Math Achievement in Grades Three and Four

    ERIC Educational Resources Information Center

    Stanley, Ashley M.

    2011-01-01

    The purpose of this study was to explore the relationship between the use of a computer-managed integrated learning system entitled Accelerated Math (AM) as a supplement to traditional mathematics instruction on achievement as measured by TerraNova achievement tests of third and fourth grade high-ability students. Gender, socioeconomic status, and…

  7. Achievements and Challenges in Computational Protein Design.

    PubMed

    Samish, Ilan

    2017-01-01

    Computational protein design (CPD), a yet evolving field, includes computer-aided engineering for partial or full de novo designs of proteins of interest. Designs are defined by a requested structure, function, or working environment. This chapter describes the birth and maturation of the field by presenting 101 CPD examples in a chronological order emphasizing achievements and pending challenges. Integrating these aspects presents the plethora of CPD approaches with the hope of providing a "CPD 101". These reflect on the broader structural bioinformatics and computational biophysics field and include: (1) integration of knowledge-based and energy-based methods, (2) hierarchical designated approach towards local, regional, and global motifs and the integration of high- and low-resolution design schemes that fit each such region, (3) systematic differential approaches towards different protein regions, (4) identification of key hot-spot residues and the relative effect of remote regions, (5) assessment of shape-complementarity, electrostatics and solvation effects, (6) integration of thermal plasticity and functional dynamics, (7) negative design, (8) systematic integration of experimental approaches, (9) objective cross-assessment of methods, and (10) successful ranking of potential designs. Future challenges also include dissemination of CPD software to the general use of life-sciences researchers and the emphasis of success within an in vivo milieu. CPD increases our understanding of protein structure and function and the relationships between the two along with the application of such know-how for the benefit of mankind. Applied aspects range from biological drugs, via healthier and tastier food products to nanotechnology and environmentally friendly enzymes replacing toxic chemicals utilized in the industry.

  8. The Relationship between Computer Games and Reading Achievement

    ERIC Educational Resources Information Center

    Reed, Tammy Dotson

    2010-01-01

    Illiteracy rates are increasing. The negative social and economic effects caused by weak reading skills include political unrest, social and health service inequality, poverty, and employment challenges. This quantitative study explored the proposition that the use of computer software games would increase reading achievement in second grade…

  9. The Effect of the Computer Assisted Teaching and 7e Model of the Constructivist Learning Methods on the Achievements and Attitudes of High School Students

    ERIC Educational Resources Information Center

    Gönen, Selahattin; Kocakaya, Serhat; Inan, Cemil

    2006-01-01

This study compares the effects of the Computer Assisted Teaching method and the 7E model of the Constructivist Learning method on the attitudes and achievements of students in physics classes. The experiments have been carried out in a private high school in Diyarbakir/Turkey on groups of first year students whose pre-test scores of…

  10. Spatial Experiences of High Academic Achievers: Insights from a Developmental Perspective

    ERIC Educational Resources Information Center

    Weckbacher, Lisa Marie; Okamoto, Yukari

    2012-01-01

    The study explored the relationship between types of spatial experiences and spatial abilities among 13- to 14-year-old high academic achievers. Each participant completed two spatial tasks and a survey assessing favored spatial activities across five categories (computers, toys, sports, music, and art) and three developmental periods (early…

  11. Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story

    ERIC Educational Resources Information Center

    Gunbas, N.

    2015-01-01

The purpose of this study was to investigate the effect of a computer-based story, which was designed in an anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as a computer story, and then compared with the paper-based version of the same story…

  12. The Effects of Computer Games on the Achievement of Basic Mathematical Skills

    ERIC Educational Resources Information Center

    Sayan, Hamiyet

    2015-01-01

This study aims to analyze the relationship between playing computer games and learning basic mathematics skills. It shows the role computer games play in students' learning and achievement of basic mathematical skills. Nowadays it is clear that individuals, especially young people, are very fond of computers and computer games. Since…

  13. Comparing the Reading Performance of High-Achieving Adolescents: Computer-Based Testing versus Paper/Pencil

    ERIC Educational Resources Information Center

    Eno, Linda Peet

    2011-01-01

    Literacy is moving into the digital context. Many of the literacy tasks associated with higher education, the workplace, and civic life now take place in the digital world. Literacy in high school, however, languishes in the text world. This study compared the text literacy of a group of high-achieving 10th-grade students, to their digital…

  14. The Effects of Modern Mathematics Computer Games on Mathematics Achievement and Class Motivation

    ERIC Educational Resources Information Center

    Kebritchi, Mansureh; Hirumi, Atsusi; Bai, Haiyan

    2010-01-01

    This study examined the effects of a computer game on students' mathematics achievement and motivation, and the role of prior mathematics knowledge, computer skill, and English language skill on their achievement and motivation as they played the game. A total of 193 students and 10 teachers participated in this study. The teachers were randomly…

  15. High Achievers: 23rd Annual Survey. Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    This report presents data from an annual survey of high school student leaders and high achievers. It is noted that of the nearly 700,000 high achievers featured in this edition, 5,000 students were sent the survey and 2,092 questionnaires were completed. Subjects were high school juniors and seniors selected for recognition by their principals or…

  16. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
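The "50,000 times faster" figure in this record follows from the two quoted ratings (a quick check; the abstract rounds down from the exact ratio):

```python
columbia_flops = 51.9e12  # 51.9 teraflop/s on 10,240 processors (2004)
old_ames_flops = 1e9      # ~1 gigaflop, the fastest Ames machine ~20 years prior

speedup = columbia_flops / old_ames_flops
print(speedup)  # 51900.0 -- roughly the "50,000 times faster" quoted
```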

  17. High-End Computing Challenges in Aerospace Design and Engineering

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had significant impact on aerospace design and engineering and is poised to make even more in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses modeling capabilities needed for each challenge and presents projections of future near and far-term HEC computing requirements. NASA's HEC Project Columbia is described and programming strategies presented that are necessary to achieve high real performance.

  18. Reasoning Abilities in Primary School: A Pilot Study on Poor Achievers vs. Normal Achievers in Computer Game Tasks

    ERIC Educational Resources Information Center

    Dagnino, Francesca Maria; Ballauri, Margherita; Benigno, Vincenza; Caponetto, Ilaria; Pesenti, Elia

    2013-01-01

    This paper presents the results of preliminary research on the assessment of reasoning abilities in primary school poor achievers vs. normal achievers using computer game tasks. Subjects were evaluated by means of cognitive assessment on logical abilities and academic skills. The aim of this study is to better understand the relationship between…

  19. The Computer Book of the Internal Medicine Resident: competence acquisition and achievement of learning objectives.

    PubMed

    Oristrell, J; Oliva, J C; Casanovas, A; Comet, R; Jordana, R; Navarro, M

    2014-01-01

    The Computer Book of the Internal Medicine resident (CBIMR) is a computer program that was validated to analyze the acquisition of competences in teams of Internal Medicine residents. To analyze the characteristics of the rotations during the Internal Medicine residency and to identify the variables associated with the acquisition of clinical and communication skills, the achievement of learning objectives and resident satisfaction. All residents of our service (n=20) participated in the study during a period of 40 months. The CBIMR consisted of 22 self-assessment questionnaires specific for each rotation, with items on services (clinical workload, disease protocolization, resident responsibilities, learning environment, service organization and teamwork) and items on educational outcomes (acquisition of clinical and communication skills, achievement of learning objectives, overall satisfaction). Associations between services features and learning outcomes were analyzed using bivariate and multivariate analysis. An intense clinical workload, high resident responsibilities and disease protocolization were associated with the acquisition of clinical skills. High clinical competence and teamwork were both associated with better communication skills. Finally, an adequate learning environment was associated with increased clinical competence, the achievement of educational goals and resident satisfaction. Potentially modifiable variables related with the operation of clinical services had a significant impact on the acquisition of clinical and communication skills, the achievement of educational goals, and resident satisfaction during the specialized training in Internal Medicine. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  20. Elementary School Computer Access, Socioeconomic Status, Ethnicity, and Grade 5 Student Achievement

    ERIC Educational Resources Information Center

    Barrett, Julie Ann

    2013-01-01

    Purpose: The purpose of this study was to describe the current school computer access rates of elementary school students and to determine the extent to which school computer access relates to academic achievement among Grade 5 students in the state of Texas. Specifically, the relationship of school computer access to student passing rates on the…

  1. Achievements and challenges in structural bioinformatics and computational biophysics.

    PubMed

    Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J

    2015-01-01

The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, with developments captured annually through the 3DSIG meeting, upon which this article reflects. An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation against other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.

  2. Achievements and challenges in structural bioinformatics and computational biophysics

    PubMed Central

    Samish, Ilan; Bourne, Philip E.; Najmanovich, Rafael J.

    2015-01-01

Motivation: The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, with developments captured annually through the 3DSIG meeting, upon which this article reflects. Results: An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation against other computational and experimental results. Conclusion: The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. Contact: Rafael.Najmanovich@USherbrooke.ca PMID:25488929

  3. Effectiveness of Computer-Assisted Mathematics Education (CAME) over Academic Achievement: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Demir, Seda; Basol, Gülsah

    2014-01-01

    The aim of the current study is to determine the overall effects of Computer-Assisted Mathematics Education (CAME) on academic achievement. After an extensive review of the literature, studies using Turkish samples and observing the effects of Computer-Assisted Education (CAE) on mathematics achievement were examined. As a result of this…

  4. Poor Results for High Achievers

    ERIC Educational Resources Information Center

    Bui, Sa; Imberman, Scott; Craig, Steven

    2012-01-01

    Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…

  5. 16th Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    The report presents data from 2,043 questionnaires completed by secondary student leaders and high achievers. Ss were selected for recognition in "Who's Who Among American High School Students" by their principals or guidance counselors, national youth organizations, or the publishing company because of high achievement in academics, activities,…

  6. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  7. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  8. A Dataset from TIMSS to Examine the Relationship between Computer Use and Mathematics Achievement

    ERIC Educational Resources Information Center

    Kadijevich, Djordje M.

    2015-01-01

    Because the relationship between computer use and achievement is still puzzling, there is a need to prepare and analyze good quality datasets on computer use and achievement. Such a dataset can be derived from TIMSS data. This paper describes how this dataset can be prepared. It also gives an example of how the dataset may be analyzed. The…
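One way such a derived dataset might be analyzed, sketched here with invented numbers (these are illustrative records, not actual TIMSS data), is to compare achievement across bands of computer use and compute a simple linear correlation:

```python
import math
import statistics

# Hypothetical student records: (weekly hours of computer use, maths score).
# Invented for illustration; not actual TIMSS data.
records = [(0, 512), (1, 498), (3, 531), (4, 545), (8, 506),
           (9, 489), (3, 538), (0, 505), (5, 540), (10, 478)]

def pearson(xs, ys):
    # Plain Pearson correlation, written out to avoid version dependencies.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    scale = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))
    return cov / scale

hours = [h for h, _ in records]
scores = [s for _, s in records]
r = pearson(hours, scores)          # one linear summary of the relationship

# Banding use levels can expose the non-linear pattern that makes the
# relationship "puzzling": moderate users outscoring both extremes.
def band(h):
    return "none" if h == 0 else "moderate" if h <= 5 else "heavy"

by_band = {}
for h, s in records:
    by_band.setdefault(band(h), []).append(s)
band_means = {b: statistics.fmean(v) for b, v in by_band.items()}
```

With these invented records, the band means show an inverted-U pattern even though the overall linear correlation is weak, which is exactly why dataset preparation choices matter for this research question.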

  9. Mathematics Achievement in High- and Low-Achieving Secondary Schools

    ERIC Educational Resources Information Center

    Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza

    2015-01-01

    This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…

  10. Computers and the Learning of Biological Concepts: Attitudes and Achievement of Nigerian Students.

    ERIC Educational Resources Information Center

    Jegede, Olugbemiro J.

    1991-01-01

    Compared attitudes toward computer use and achievement in biology for three groups of Nigerian students (n=64): (1) working alone with computer; (2) working in groups of three on the computer; (3) and a control group that received normal instruction (lecture). Students in the second group had the highest scores on attitude. No significant…

  11. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  12. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  13. Unfulfilled Potential: High-Achieving Minority Students and the High School Achievement Gap in Math

    ERIC Educational Resources Information Center

    Kotok, Stephen

    2017-01-01

    This study uses multilevel modeling to examine a subset of the highest performing 9th graders and explores the extent that achievement gaps in math widen for high performing African American and Latino students and their high performing White and Asian peers during high school. Using nationally representative data from the High School Longitudinal…

  14. Computer Assisted Assignment of Students to Schools to Achieve Desegregation.

    ERIC Educational Resources Information Center

    Illinois Inst. of Tech., Chicago. Research Inst.

    To help school districts with the task of assigning students to schools in order to achieve desegregation, the Illinois Institute of Technology has developed a system involving the use of planning techniques and computer technology that greatly simplifies the school district's job. The key features of the system are objectivity, minimum…

  15. Counterstereotypic Identity among High-Achieving Black Students

    ERIC Educational Resources Information Center

    Harpalani, Vinay

    2017-01-01

    This article examines how racial stereotypes affect achievement and identity formation among low income, urban Black adolescents. Specifically, the major question addressed is: how do high-achieving Black students succeed academically despite negative stereotypes of their intellectual abilities? Results indicate that high-achieving Black youth,…

  16. Self Regulated Learning of High Achievers

    ERIC Educational Resources Information Center

    Rathod, Ami

    2010-01-01

    The study was conducted on high achievers of Senior Secondary school. Main objectives were to identify the self regulated learners among the high achievers, to find out dominant components and characteristics operative in self regulated learners and to compare self regulated learning of learners with respect to their subject (science and non…

  17. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.

  18. Distraction, Response Mode, Anxiety, and Achievement in Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Tobias, Sigmund

The effects of distraction on achievement are particularly important in relation to the acceptability of computer-assisted instructional materials. In addition to these effects, various levels of anxiety may also be deleterious to the learner. In order to measure the effects of both distraction and anxiety, 121 subjects were used in a two-by-two…

  19. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that a model such as DEM can only be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results are presented from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain-partitioning approach. The effective use of the cache and hierarchical memories of modern computers is discussed, as well as the performance, speed-ups and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
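The domain-partitioning idea behind the parallel DEM can be sketched in a few lines. The function below is illustrative only (not the actual DEM code): it splits the rows of a computational grid into contiguous, near-equal strips, one per process.

```python
def partition_rows(n_rows, n_procs):
    """Split the grid's rows into contiguous, near-equal strips, one per
    process (a sketch of the domain-partitioning idea, not DEM code)."""
    base, extra = divmod(n_rows, n_procs)
    strips, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)  # first `extra` strips get one extra row
        strips.append((start, start + size))   # half-open interval [start, stop)
        start += size
    return strips
```

Each process would then transport pollutants only within its own strip, exchanging boundary rows with neighbouring processes (via MPI in the real code) at every time step.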

  20. Achievement Motivations of the Students Studying at Computer and Instructional Technologies Teaching Department

    ERIC Educational Resources Information Center

    Semerci, Cetin; Duman, Burcu

    2013-01-01

    The aim of this research is to determine achievement motivations of the students studying at Computer and Instructional Technologies Teaching (CITT) Department. In this research, survey method is used. In the frame of this method, the existing situation about the achievement motivations of CITT students in Yuzuncu Yil and Firat Universities in…

  1. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

NREL's High-Performance Computing (HPC) User Facility provides computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System, to advance energy efficiency and renewable energy technologies.

  2. The Impact of Internet Virtual Physics Laboratory Instruction on the Achievement in Physics, Science Process Skills and Computer Attitudes of 10th-Grade Students

    NASA Astrophysics Data System (ADS)

    Yang, Kun-Yuan; Heh, Jia-Sheng

    2007-10-01

The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction and traditional laboratory instruction on the physics academic achievement, science process skills performance, and computer attitudes of tenth-grade students. One hundred and fifty students from four classes at one private senior high school in Taoyuan County, Taiwan, R.O.C. were sampled and divided equally into an experimental group and a control group. The pre-test results indicated that the students' entry-level physics academic achievement, science process skills, and computer attitudes were equal for both groups. On the post-test, the experimental group achieved significantly higher mean scores in physics academic achievement and science process skills. There was no significant difference in computer attitudes between the groups. We concluded that the IVPL had potential to help tenth graders improve their physics academic achievement and science process skills.

  3. Motivation and Performance within a Collaborative Computer-Based Modeling Task: Relations between Students' Achievement Goal Orientation, Self-Efficacy, Cognitive Processing, and Achievement

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette

    2008-01-01

    Purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…

  4. High-speed linear optics quantum computing using active feed-forward.

    PubMed

    Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton

    2007-01-04

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.
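The feed-forward idea can be sketched numerically. The following minimal simulation illustrates the general one-way-computation scheme (it is not the authors' experimental code): qubit 1 of a two-qubit cluster state is measured in a rotated basis, and the X^m byproduct left on qubit 2 is undone by the feed-forward correction, so the output state no longer depends on the random outcome m.

```python
import cmath
import math
import random

def feed_forward_step(psi, alpha, rng):
    """Measure qubit 1 of the cluster CZ(|psi>|+>) in the basis
    (|0> ± e^{i*alpha}|1>)/sqrt(2). The random outcome m leaves an X^m
    byproduct on qubit 2, which the feed-forward correction removes."""
    a, b = psi                                        # qubit state a|0> + b|1>
    e = cmath.exp(-1j * alpha)
    out = {0: [(a + e * b) / 2, (a - e * b) / 2],     # outcome m = 0
           1: [(a - e * b) / 2, (a + e * b) / 2]}     # outcome m = 1 (= X * out[0])
    p0 = abs(out[0][0]) ** 2 + abs(out[0][1]) ** 2    # = 1/2: outcomes equiprobable
    m = 0 if rng.random() < p0 else 1
    c0, c1 = out[m]
    if m == 1:                                        # feed-forward correction X^m
        c0, c1 = c1, c0
    norm = math.sqrt(abs(c0) ** 2 + abs(c1) ** 2)
    return m, [c0 / norm, c1 / norm]
```

With the correction applied, every run returns the same state (H·Rz(alpha)|psi>, with Rz = diag(1, e^{-i*alpha}) in this convention), which is what makes the computation deterministic despite the random measurement outcomes.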

  5. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images while noticeably reducing the computation time, owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  6. Online High School Achievement versus Traditional High School Achievement

    ERIC Educational Resources Information Center

    Blohm, Katherine E.

    2017-01-01

    The following study examined the question of student achievement in online charter schools and how the achievement scores of students at online charter schools compare to achievement scores of students at traditional schools. Arizona has seen explosive growth in charter schools and online charter schools. A study comparing how these two types of…

  7. High fat diet promotes achievement of peak bone mass in young rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malvi, Parmanand; Piprode, Vikrant; Chaube, Balkrishna

Highlights: • A high fat diet helps in achieving peak bone mass at a younger age. • Shifting from a high fat diet to a normal diet normalizes obesity parameters. • Bone parameters are sustained even after withdrawal of the high fat diet. Abstract: The relationship between obesity and bone is complex. Epidemiological studies demonstrate positive as well as negative correlations between obesity and bone health. In the present study, we investigated the impact of high fat diet-induced obesity on peak bone mass. After 9 months of feeding young rats a high fat diet, we observed an obesity phenotype in the rats, with increased body weight, fat mass, serum triglycerides and cholesterol. There were significant increases in serum total alkaline phosphatase, bone mineral density and bone mineral content. By micro-computed tomography (μ-CT), we observed a trend toward better trabecular bone with respect to microarchitecture and geometry. This indicated that a high fat diet helps in achieving peak bone mass and microstructure at a younger age. We subsequently shifted rats from the high fat diet to a normal diet for 6 months and evaluated bone/obesity parameters. After the shift, fat mass, serum triglycerides and cholesterol were significantly decreased. Interestingly, the gains in bone mineral density, bone mineral content and trabecular bone parameters achieved on the high fat diet were retained even after body weight and obesity were normalized. These results suggest that a fat-rich diet during growth could accelerate the achievement of peak bone mass that is sustainable even after withdrawal of the high fat diet.

  8. Computer-Adaptive Testing: Implications for Students' Achievement, Motivation, Engagement, and Subjective Test Experience

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Lazendic, Goran

    2018-01-01

    The present study investigated the implications of computer-adaptive testing (operationalized by way of multistage adaptive testing; MAT) and "conventional" fixed order computer testing for various test-relevant outcomes in numeracy, including achievement, test-relevant motivation and engagement, and subjective test experience. It did so…

  9. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties that need to be addressed when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  10. Effect of Varied Computer Based Presentation Sequences on Facilitating Student Achievement.

    ERIC Educational Resources Information Center

    Noonen, Ann; Dwyer, Francis M.

    1994-01-01

    Examines the effectiveness of visual illustrations in computer-based education, the effect of order of visual presentation, and whether screen design affects students' use of graphics and text. Results indicate that order of presentation and choice of review did not influence student achievement; however, when given a choice, students selected the…

  11. High-Productivity Computing in Computational Physics Education

    NASA Astrophysics Data System (ADS)

    Tel-Zur, Guy

    2011-03-01

We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and M.Sc. students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also address High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy"; we add "Performance." Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing with Condor, Parallel Programming with MPI and OpenMP, How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course does not intend to teach new physics or new mathematics; rather, it is focused on an integrated approach to solving problems, starting from the physics problem, through the corresponding mathematical solution and numerical scheme, to writing an efficient computer code and finally analysis and visualization.

  12. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach.
The talk will review the current optimisation activities within the SFT group, with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
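The "vectors of particles" scheduling idea can be sketched as follows. This is a toy illustration, not Geant-V code: the transport kernel and all names are invented, and real transport would operate on full particle states rather than bare energies. Particles are grouped into baskets, and the baskets are dispatched to a pool of workers.

```python
from concurrent.futures import ThreadPoolExecutor

def transport(basket):
    # Stand-in for a per-basket transport kernel (invented for illustration):
    # each particle in the basket loses 10% of its energy.
    return [energy * 0.9 for energy in basket]

def schedule(energies, basket_size=4, workers=3):
    """Group particles into fixed-size baskets and dispatch the baskets to a
    worker pool, mimicking fine-grain scheduling of particle vectors."""
    baskets = [energies[i:i + basket_size]
               for i in range(0, len(energies), basket_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transport, baskets)   # map preserves basket order
    return [e for basket in results for e in basket]
```

Grouping into baskets is what lets each kernel invocation process a vector of similar work items, which is the precondition for exploiting instruction-level vectorization in the real framework.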

  13. The effects of computer-simulated experiments on high school biology students' problem-solving skills and achievement

    NASA Astrophysics Data System (ADS)

    Carmack, Gay Lynn Dickinson

    2000-10-01

This two-part quasi-experimental repeated measures study examined whether computer-simulated experiments (CSE) have an effect on the problem solving skills of high school biology students in a school-within-a-school magnet program. Specifically, the study identified episodes in a simulation sequence where problem solving skills improved. In the Fall academic semester, experimental group students (n = 30) were exposed to two simulations: CaseIt! and EVOLVE!. Control group students participated in an internet research project and a paper Hardy-Weinberg activity. In the Spring academic semester, experimental group students were exposed to three simulations: Genetics Construction Kit, CaseIt! and EVOLVE!. Spring control group students participated in a Drosophila lab, an internet research project, and Advanced Placement lab 8. Results indicate that the Fall and Spring experimental groups experienced significant gains in scientific problem solving after the second simulation in the sequence. These gains were independent of the simulation sequence and the amount of time spent on the simulations. These gains were significantly greater than control group scores in the Fall. The Spring control group significantly outscored all other study groups on both pretest measures. Even so, the Spring experimental group's problem solving performance caught up to the Spring control group's after the third simulation. There were no significant differences between control and experimental groups on content achievement. Results indicate that CSE is as effective as traditional laboratories in promoting scientific problem solving and that CSE is a useful tool for improving students' scientific problem solving skills. Moreover, retention of problem solving skills is enhanced by utilizing more than one simulation.

  14. Effects of Computer-Assisted Jigsaw II Cooperative Learning Strategy on Physics Achievement and Retention

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Yusuf, Mudasiru Olalere

    2016-01-01

    This study investigated the effects of a computer-assisted Jigsaw II cooperative strategy on physics achievement and retention. The study also examined how the moderating variable of achievement level affects students' performance in physics when Jigsaw II cooperative learning is used as an instructional strategy. Purposive sampling technique…

  15. High-Performance Computing Data Center | Computational Science | NREL

    Science.gov Websites

    The data center uses warm-water liquid cooling to achieve its very low PUE, then captures and reuses waste heat from computing components as the primary source of building heat. A dry cooler that uses refrigerant in a passive cycle to dissipate heat is reducing onsite water use. Efficiency is measured through PUE.
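    Power Usage Effectiveness (PUE) is the ratio of total facility energy to the energy delivered to the computing equipment itself, so a value near 1.0 means almost no overhead for cooling and power distribution. A minimal sketch of the metric, using illustrative numbers rather than NREL's measurements:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a data center over some reporting period.

    PUE = total facility energy / IT equipment energy; the closer to 1.0,
    the less energy is spent on cooling and other overhead.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative values only: 1,060 kWh total facility use against
# 1,000 kWh of IT load gives a PUE of 1.06.
print(round(pue(1060.0, 1000.0), 2))  # → 1.06
```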

  16. Attitudes and Opinions from the Nation's High Achieving Teens. 18th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Educational Communications, Inc., Lake Forest, IL.

    This document contains factsheets and news releases which cite findings from a national survey of 1,985 high achieving high school students. Factsheets describe the Who's Who Among American High School Students recognition and service program for high school students and explain the Who's Who survey. A summary report of this eighteenth annual…

  17. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  18. The effects of modeling instruction on high school physics academic achievement

    NASA Astrophysics Data System (ADS)

    Wright, Tiffanie L.

    The purpose of this study was to explore whether Modeling Instruction, compared to traditional lecturing, is an effective instructional method to promote academic achievement in selected high school physics classes at a rural middle Tennessee high school. This study used an ex post facto, quasi-experimental research methodology. The independent variables in this study were the instructional methods of teaching. The treatment variable was Modeling Instruction and the control variable was traditional lecture instruction. The Treatment Group consisted of participants in Physical World Concepts who received Modeling Instruction. The Control Group consisted of participants in Physical Science who received traditional lecture instruction. The dependent variable was gain scores on the Force Concepts Inventory (FCI). The participants for this study were 133 students each in both the Treatment and Control Groups (n = 266), who attended a public high school in rural middle Tennessee. The participants were administered the Force Concepts Inventory (FCI) prior to being taught the mechanics of physics. The FCI data were entered into the computer-based Statistical Package for the Social Sciences (SPSS). Two independent samples t-tests were conducted to answer the research questions. There was a statistically significant difference between the treatment and control groups concerning the instructional method. Modeling Instructional methods were found to be effective in increasing the academic achievement of students in high school physics. There was no statistically significant difference between FCI gain scores for gender. Gender was found to have no effect on the academic achievement of students in high school physics classes. However, even though there was not a statistically significant difference, female students' gain scores were higher than male students' gain scores when Modeling Instructional methods of teaching were used. Based on these findings, it is recommended
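    The independent samples t-tests the study ran in SPSS compare the mean gain scores of two unrelated groups. A minimal sketch of the Welch form of that test, using made-up gain scores rather than the study's FCI data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic for two independent samples.

    Uses sample variances, so groups need not have equal variance
    or equal size.
    """
    va, vb = variance(a), variance(b)
    standard_error = (va / len(a) + vb / len(b)) ** 0.5
    return (mean(a) - mean(b)) / standard_error

# Hypothetical gain scores for a treatment and a control group.
treatment = [12, 15, 14, 10, 13, 16]
control = [8, 9, 11, 7, 10, 9]
print(round(welch_t(treatment, control), 2))  # → 4.11
```

    A large positive t (relative to the relevant t distribution) indicates the treatment group's mean gain exceeds the control group's by more than chance would explain.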

  19. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  20. Quantity and Quality of Computer Use and Academic Achievement: Evidence from a Large-Scale International Test Program

    ERIC Educational Resources Information Center

    Cheema, Jehanzeb R.; Zhang, Bo

    2013-01-01

    This study looked at the effect of both quantity and quality of computer use on achievement. The Program for International Student Assessment (PISA) 2003 student survey, comprising 4,356 students (boys, n = 2,129; girls, n = 2,227), was used to predict academic achievement from quantity and quality of computer use while controlling for…

  1. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study.

    PubMed

    Abdulghani, Hamza M; Al-Drees, Abdulmajeed A; Khalil, Mahmood S; Ahmad, Farah; Ponnamperuma, Gominda G; Amin, Zubair

    2014-04-01

    Medical students' academic achievement is affected by many factors such as motivational beliefs and emotions. Although students with high intellectual capacity are selected to study medicine, their academic performance varies widely. The aim of this study is to explore the high achieving students' perceptions of factors contributing to academic achievement. Focus group discussions (FGD) were carried out with 10 male and 9 female high achieving (scores more than 85% in all tests) students, from the second, third, fourth and fifth academic years. During the FGDs, the students were encouraged to reflect on their learning strategies and activities. The discussion was audio-recorded, transcribed and analysed qualitatively. Factors influencing high academic achievement include: attendance to lectures, early revision, prioritization of learning needs, deep learning, learning in small groups, mind mapping, learning in skills lab, learning with patients, learning from mistakes, time management, and family support. Internal motivation and expected examination results are important drivers of high academic performance. Management of non-academic issues like sleep deprivation, homesickness, language barriers, and stress is also important for academic success. Addressing these factors, which might be unique for a given student community, in a systematic manner would be helpful to improve students' performance.

  2. High fat diet promotes achievement of peak bone mass in young rats.

    PubMed

    Malvi, Parmanand; Piprode, Vikrant; Chaube, Balkrishna; Pote, Satish T; Mittal, Monika; Chattopadhyay, Naibedya; Wani, Mohan R; Bhat, Manoj Kumar

    2014-12-05

    The relationship between obesity and bone is complex. Epidemiological studies demonstrate positive as well as negative correlation between obesity and bone health. In the present study, we investigated the impact of high fat diet (HFD)-induced obesity on peak bone mass. After 9 months of feeding young rats a high fat diet, we observed an obesity phenotype in rats with increased body weight, fat mass, serum triglycerides and cholesterol. There were significant increases in serum total alkaline phosphatase, bone mineral density and bone mineral content. By micro-computed tomography (μ-CT), we observed a trend of better trabecular bones with respect to their microarchitecture and geometry. This indicated that high fat diet helps in achieving peak bone mass and microstructure at a younger age. We subsequently shifted rats from high fat diet to normal diet for 6 months and evaluated bone/obesity parameters. It was observed that after shifting rats from high fat diet to normal diet, fat mass, serum triglycerides and cholesterol were significantly decreased. Interestingly, the gain in bone mineral density, bone mineral content and trabecular bone parameters by HFD was retained even after body weight and obesity were normalized. These results suggest that a fat-rich diet during growth could accelerate achievement of peak bone mass that is sustainable even after withdrawal of high fat diet.

  3. The effects of home computer access and social capital on mathematics and science achievement among Asian-American high school students in the NELS:88 data set

    NASA Astrophysics Data System (ADS)

    Quigley, Mark Declan

    The purpose of this research was to examine specific environmental, educational, and demographic factors and their influence on mathematics and science achievement. In particular, the researcher ascertained the interconnections of home computer access and social capital with Asian American students and the effect on mathematics and science achievement. Coleman's theory on social capital and parental influence was used as a basis for the analysis of data. Subjects for this study were the base year students from the National Education Longitudinal Study of 1988 (NELS:88) and the subsequent follow-up survey data in 1990, 1992, and 1994. The approximate sample size for this study is 640 ethnic Asians from the NELS:88 database. The analysis was a longitudinal study based on the Student and Parent Base Year responses and the Second Follow-up survey of 1992, when the subjects were in 12th grade. Achievement test results from the NELS:88 data were used to measure achievement in mathematics and science. The NELS:88 test battery was developed to measure both individual status and a student's growth in a number of achievement areas. The subjects' responses were analyzed by principal components factor analysis, weights, effect sizes, hierarchical regression analysis, and PLS path analysis. The results of this study were that prior ability in mathematics and science is a major influence in the student's educational achievement. Findings from the study support the view that home computer access has a negative direct effect on mathematics and science achievement for both Asian American males and females. None of the social capital factors in the study had either a negative or positive direct effect on mathematics and science achievement, although some indirect effects were found. Suggestions were made toward increasing parental involvement in their children's academic endeavors. Computer access in the home should be considered related to television viewing and should be closely

  4. Perspectives of High-Achieving Women on Teaching

    ERIC Educational Resources Information Center

    Snodgrass, Helen

    2010-01-01

    High-achieving women are significantly less likely to enter the teaching profession than they were just 40 years ago. Why? While the social and economic reasons for this decline have been well documented in the literature, what is lacking is a discussion with high-achieving women, as they make their first career decisions, about their perceptions…

  5. Effects of Computer Applications on Elementary School Students' Achievement: A Meta-Analysis of Students in Taiwan

    ERIC Educational Resources Information Center

    Liao, Yuen-kuang Cliff; Chang, Huei-wen; Chen, Yu-wen

    2008-01-01

    A meta-analysis was performed to synthesize existing research comparing the effects of computer applications (i.e., computer-assisted instruction, computer simulations, and Web-based learning) versus traditional instruction on elementary school students' achievement in Taiwan. Forty-eight studies were located from four sources, and their…

  6. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite-based solar cells, with their characteristics of rather low raw-materials cost, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin-film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high-performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high-performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor-assisted solution process (VASP) and a moisture-assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high-quality perovskite film, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvest; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in film, device architecture, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  7. Psychosocial Keys to African American Achievement? Examining the Relationship between Achievement and Psychosocial Variables in High Achieving African Americans

    ERIC Educational Resources Information Center

    Dixson, Dante D.; Roberson, Cyrell C. B.; Worrell, Frank C.

    2017-01-01

    Grit, growth mindset, ethnic identity, and other group orientation are four psychosocial variables that have been associated with academic achievement in adolescent populations. In a sample of 105 high achieving African American high school students (cumulative grade point average [GPA] > 3.0), we examined whether these four psychosocial…

  8. 22nd Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Northbrook, IL.

    This study surveyed high school students (N=1,879) who were student leaders or high achievers in the spring of 1991 for the purpose of determining their attitudes. Students were members of the junior or senior high school class during the 1990-91 academic year and were selected for recognition by their principals or guidance counselors, other…

  9. A Program To Develop through LOGO the Computer Self-Confidence of Seventh Grade Low-Achieving Girls.

    ERIC Educational Resources Information Center

    Angell, Marion D.

    This practicum report describes the development of a program designed to improve low-achieving seventh grade girls' self-confidence with computers. The questionnaire "My Feelings Towards Computers" was used for pre- and post-comparisons. Students were introduced to the computer program LOGO and were taught to compose programs using the…

  10. High School Mathematics Teachers' Levels of Achieving Technology Integration and In-Class Reflections: The Case of Mathematica

    ERIC Educational Resources Information Center

    Ardiç, Mehmet Alper; Isleyen, Tevfik

    2017-01-01

    The purpose of this study is to determine high school mathematics teachers' levels of achieving mathematics instruction via computer algebra systems, and the reflections of these practices in the classroom. Three high school mathematics teachers employed at different types of schools participated in the study. In the beginning of this…

  11. Computer Science Majors: Sex Role Orientation, Academic Achievement, and Social Cognitive Factors

    ERIC Educational Resources Information Center

    Brown, Chris; Garavalia, Linda S.; Fritts, Mary Lou Hines; Olson, Elizabeth A.

    2006-01-01

    This study examined the sex role orientations endorsed by 188 male and female students majoring in computer science, a male-dominated college degree program. The relations among sex role orientation and academic achievement and social cognitive factors influential in career decision-making self-efficacy were explored. Findings revealed that…

  12. Effectiveness of Computer-Assisted STAD Cooperative Learning Strategy on Physics Problem Solving, Achievement and Retention

    ERIC Educational Resources Information Center

    Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere

    2015-01-01

    This study investigated the effectiveness of computer-assisted Students' Team Achievement Division (STAD) cooperative learning strategy on physics problem solving, students' achievement and retention. It also examined if the student performance would vary with gender. Purposive sampling technique was used to select two senior secondary schools…

  13. 21st Annual Survey of High Achievers: Attitudes and Opinions from the Nation's High Achieving Teens.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    This survey was conducted by Who's Who Among American High School Students during the spring of 1990, to determine the attitudes of student leaders in U.S. high schools. A survey of high achievers sent to 5,000 students was completed and returned by approximately 2,000 students. All students were members of the junior or senior class during the…

  14. Attitudes and Opinions from the Nation's High Achieving Teens: 26th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    A national survey of 3,351 high achieving high school students (junior and senior level) was conducted. All students had A or B averages. Topics covered include lifestyles, political beliefs, violence and entertainment, education, cheating, school violence, sexual violence and date rape, peer pressure, popularity, suicide, drugs and alcohol,…

  15. Attitudes and Opinions from the Nation's High Achieving Teens. 24th Annual Survey of High Achievers.

    ERIC Educational Resources Information Center

    Who's Who among American High School Students, Lake Forest, IL.

    This survey represents information compiled by the largest national survey of adolescent leaders and high achievers. Of the 5,000 students selected demographically from "Who's Who Among American High School Students," 1,957 responded. All students surveyed had "A" or "B" averages, and 98% planned on attending college. Questions were asked about…

  16. Computer Ratio and Student Achievement in Reading and Math in a North Carolina School District

    ERIC Educational Resources Information Center

    Preswood, Erica

    2017-01-01

    This longitudinal research project explored the relationship between a 1:1 computing initiative and student achievement on the North Carolina End of Grade Reading Comprehension and Math tests in the study school district. The purpose of this research study was to determine if the implementation of a 1:1 computing initiative impacted student…

  17. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
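    The synchronization scheme in the last sentence can be sketched as follows: each processor records the difference between a broadcast master time signal and its own local clock at the moment of receipt, then applies that offset when timestamping work. This is a minimal illustration, with hypothetical names, not the report's implementation:

```python
class SlaveClock:
    """Tracks the offset between a broadcast master clock and a local clock."""

    def __init__(self):
        self.offset = 0.0  # master_time - local_time, updated on each broadcast

    def on_master_broadcast(self, master_time: float, local_time: float) -> None:
        # Record the offset observed when the master time signal arrives.
        self.offset = master_time - local_time

    def to_master_time(self, local_time: float) -> float:
        # Convert a local timestamp into the shared master time base.
        return local_time + self.offset

# A processor whose local clock runs 2.5 units behind the master:
clk = SlaveClock()
clk.on_master_broadcast(master_time=100.0, local_time=97.5)
print(clk.to_master_time(98.0))  # → 100.5
```

    In a real system the offset would also account for broadcast propagation delay; this sketch ignores that to show only the offset bookkeeping.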

  18. Student Perceptions of High-Achieving Classmates

    ERIC Educational Resources Information Center

    Händel, Marion; Vialle, Wilma; Ziegler, Albert

    2013-01-01

    The reported study investigated students' perceptions of their high-performing classmates in terms of intelligence, social skills, and conscientiousness in different school subjects. The school subjects for study were examined with regard to cognitive, physical, and gender-specific issues. The results show that high academic achievements in…

  19. Self-Concept and Achievement Motivation of High School Students

    ERIC Educational Resources Information Center

    Lawrence, A. S. Arul; Vimala, A.

    2013-01-01

    The present study "Self-concept and Achievement Motivation of High School Students" was investigated to find the relationship between Self-concept and Achievement Motivation of High School Students. Data for the study were collected using Self-concept Questionnaire developed by Raj Kumar Saraswath (1984) and Achievement Motive Test (ACMT)…

  20. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
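    The recognition step described above amounts to comparing a generated ISAR image against every entry of a reference database and picking the best match, which is why its cost scales with database size. A toy sketch of that step, with flattened image vectors and Euclidean distance standing in for the paper's actual comparison metric (all names and values are illustrative):

```python
def classify(image, database):
    """Return the label of the database entry nearest to `image`.

    `image` is a flattened image vector; `database` maps labels to
    reference vectors of the same length.
    """
    def dist(a, b):
        # Euclidean distance between two flattened image vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return min(database, key=lambda label: dist(image, database[label]))

# Two hypothetical reference signatures and one observed image.
db = {"target_A": [1.0, 0.2, 0.1], "target_B": [0.1, 0.9, 0.8]}
print(classify([0.9, 0.3, 0.2], db))  # → target_A
```

    Because every query scans the whole database, real-time identification pushes implementations toward parallel hardware, which is the platform question the paper analyzes.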

  1. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  2. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  3. The effectiveness of computer-managed instruction versus traditional classroom lecture on achievement outcomes.

    PubMed

    Schmidt, S M; Arndt, M J; Gaston, S; Miller, B J

    1991-01-01

    This controlled experimental study examines the effect of two teaching methods on achievement outcomes from a 15-week, 2 credit hour semester course taught at two midwestern universities. Students were randomly assigned to either computer-managed instruction in which faculty function as tutors or the traditional classroom course of study. In addition, the effects of age, grade point average, attitudes toward computers, and satisfaction with the course on teaching method were analyzed using analysis of covariance. Younger students achieved better scores than did older students. Regardless of teaching method, however, neither method appeared to be better than the other for teaching course content. Students did not prefer one method over the other as indicated by their satisfaction scores. With demands upon university faculty to conduct research and publish, alternative methods of teaching that free faculty from the classroom should be considered. This study suggests that educators can select such an alternative teaching method to traditional classroom teaching without sacrificing quality education for certain courses.

  4. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
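    The quoted complexity reduction from O(N^3) to O(N log N) is what makes high mode counts feasible: the speedup factor is N^3 / (N log N) = N^2 / log N, which grows rapidly with N. A small illustration of that arithmetic (operation counts are asymptotic ratios, not measured runtimes):

```python
import math

def speedup(n: int) -> float:
    """Asymptotic operation-count ratio N^3 / (N log N) = N^2 / log N."""
    return n ** 3 / (n * math.log(n))

# The ratio grows roughly quadratically in the mode count N.
for n in (1_000, 100_000):
    print(f"N={n:>7}: ~{speedup(n):.2e}x fewer operations")
```

    At N = 1,000 the iterative scheme already needs on the order of 10^5 times fewer operations than the cubic eigenmode decomposition, and the gap widens quadratically as the mode count grows.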

  5. The Effect of Interactive e-Book on Students' Achievement at Najran University in Computer in Education Course

    ERIC Educational Resources Information Center

    Ebied, Mohammed Mohammed Ahmed; Rahman, Shimaa Ahmed Abdul

    2015-01-01

    The current study aims to examine the effect of interactive e-books on students' achievement at Najran University in the computer in education course. A quasi-experimental study design is used, and to collect data the researchers built an achievement test to measure the dependent variable, represented in the achievement affected by experimental…

  6. The Relationship between Self-Esteem and Academic Achievement in a Group of High, Medium, and Low Secondary Public High School Achievers.

    ERIC Educational Resources Information Center

    Thomas-Brantley, Betty J.

    This study investigated the relationship between self-esteem and academic achievement in a group of 150 high, medium, and low achievers at a large midwestern public high school. Correlating data from the Coopersmith Inventory of self-esteem with grades, cumulative grade point averages, and class rank, the study disclosed a positive correlation…

  7. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for

  8. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report

  9. High-End Scientific Computing

    EPA Pesticide Factsheets

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  10. Review of "High-Achieving Students in the Era of NCLB"

    ERIC Educational Resources Information Center

    Camilli, Gregory

    2008-01-01

    A recent report from the Fordham Institute considers potential instructional policies for high-achieving students that should be considered in the forthcoming reauthorization of the No Child Left Behind Act. The report finds: 1) achievement growth among high-achieving students has been slower than that of low-achieving students; 2) this trend can…

  11. A heterogeneous and parallel computing framework for high-resolution hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Smith, Luke; Liang, Qiuhua

    2015-04-01

    Shock-capturing hydrodynamic models are now widely applied in the context of flood risk assessment and forecasting, accurately capturing the behaviour of surface water over ground and within rivers. Such models are generally explicit in their numerical basis and can be computationally expensive; this has prohibited full use of high-resolution topographic data for complex urban environments, now easily obtainable through airborne altimetric surveys (LiDAR). As processor clock speed advances have stagnated in recent years, further computational performance gains are largely dependent on the use of parallel processing. Heterogeneous computing architectures (e.g. graphics processing units or compute accelerator cards) provide a cost-effective means of achieving high throughput in cases where the same calculation is performed with a large input dataset. In recent years this technique has been applied successfully for flood risk mapping, such as within the national surface water flood risk assessment for the United Kingdom. We present a flexible software framework for hydrodynamic simulations across multiple processors of different architectures, within multiple computer systems, enabled using OpenCL and Message Passing Interface (MPI) libraries. A finite-volume Godunov-type scheme is implemented using the HLLC approach to solving the Riemann problem, with optional extension to second-order accuracy in space and time using the MUSCL-Hancock approach. The framework is successfully applied on personal computers and a small cluster to provide considerable improvements in performance. The most significant performance gains were achieved across two servers, each containing four NVIDIA GPUs, with a mix of K20, M2075 and C2050 devices. Advantages are also found in decreased parametric sensitivity, and thus reduced uncertainty, for a major fluvial flood within a large catchment during 2005 in Carlisle, England. Simulations for the three-day event could be performed
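
    The Godunov-type scheme with an approximate Riemann flux mentioned above can be sketched in miniature. The paper uses the HLLC solver in 2D; the simplified sketch below shows its HLL cousin for the 1D shallow-water equations, with all states and values purely illustrative:

```python
import numpy as np

# Illustrative HLL approximate Riemann flux for the 1D shallow-water
# equations with conserved state (h, hu): depth and momentum.
G = 9.81  # gravitational acceleration (m/s^2)

def physical_flux(h, hu):
    """Exact shallow-water flux F(q) = (hu, hu^2/h + g h^2 / 2)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * G * h * h])

def hll_flux(left, right):
    """HLL interface flux between left/right states (h, hu)."""
    hL, huL = left
    hR, huR = right
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(G * hL), np.sqrt(G * hR)   # gravity wave speeds
    sL = min(uL - cL, uR - cR)                  # fastest left-going wave
    sR = max(uL + cL, uR + cR)                  # fastest right-going wave
    fL, fR = physical_flux(hL, huL), physical_flux(hR, huR)
    if sL >= 0.0:          # all waves move right: upwind from the left
        return fL
    if sR <= 0.0:          # all waves move left: upwind from the right
        return fR
    qL, qR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * fL - sL * fR + sL * sR * (qR - qL)) / (sR - sL)

# Consistency check: for identical states the HLL flux reduces to the
# physical flux, a defining property of approximate Riemann solvers.
print(np.allclose(hll_flux((2.0, 1.0), (2.0, 1.0)), physical_flux(2.0, 1.0)))
```

    In a finite-volume update, this interface flux would be evaluated between every pair of neighbouring cells; it is exactly this uniform, data-parallel structure that makes the scheme well suited to the GPU architectures the abstract describes.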

  12. Effect of Computer-Assisted Instruction on Secondary School Students' Achievement in Ecological Concepts

    ERIC Educational Resources Information Center

    Nkemdilim, Egbunonu Roseline; Okeke, Sam O. C.

    2014-01-01

    This study investigated the effects of computer-assisted instruction (CAI) on students' achievement in ecological concepts. Quasi-experimental design, specifically the pre-test post test non-equivalent control group design was adopted. The sample consisted of sixty-six (66) senior secondary year two (SS II) biology students, drawn from two…

  13. Factors Impacting Adult Learner Achievement in a Technology Certificate Program on Computer Networks

    ERIC Educational Resources Information Center

    Delialioglu, Omer; Cakir, Hasan; Bichelmeyer, Barbara A.; Dennis, Alan R.; Duffy, Thomas M.

    2010-01-01

    This study investigates the factors impacting the achievement of adult learners in a technology certificate program on computer networks. We studied 2442 participants in 256 institutions. The participants were older than age 18 and were enrolled in the Cisco Certified Network Associate (CCNA) technology training program as "non-degree" or…

  14. Relationships among Taiwanese Children's Computer Game Use, Academic Achievement and Parental Governing Approach

    ERIC Educational Resources Information Center

    Yeh, Duen-Yian; Cheng, Ching-Hsue

    2016-01-01

    This study examined the relationships among children's computer game use, academic achievement and parental governing approach to propose probable answers for the doubts of Taiwanese parents. 355 children (ages 11-14) were randomly sampled from 20 elementary schools in a typically urbanised county in Taiwan. Questionnaire survey (five questions)…

  15. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  16. The Influence of Achievement Goals on Online Help Seeking of Computer Science Students

    ERIC Educational Resources Information Center

    Hao, Qiang; Barnes, Brad; Wright, Ewan; Branch, Robert Maribe

    2017-01-01

    This study investigated the online help-seeking behaviors of computer science students with a focus on the effect of achievement goals. The online help-seeking behaviors investigated were online searching, asking teachers online for help, and asking peers or unknown people online for help. One hundred and sixty-five students studying computer…

  17. Effects of Computer Animation Instructional Package on Students' Achievement in Practical Biology

    ERIC Educational Resources Information Center

    Hamzat, Abdulrasaq; Bello, Ganiyu; Abimbola, Isaac Olakanmi

    2017-01-01

    This study examined the effects of computer animation instructional package on secondary school students' achievement in practical biology in Ilorin, Nigeria. The study adopted a pre-test, post-test, control group, non-randomised and nonequivalent quasi-experimental design, with a 2x2x3 factorial design. Two intact classes from two secondary…

  18. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes, along with the tools and strategies that best exploit the computer resources, are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm, and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
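
    The core iteration behind such a solver can be sketched briefly. The following is a minimal Lanczos tridiagonalization with full reorthogonalization (illustrative only; a production solver such as the one described above adds vectorized kernels, selective reorthogonalization, and restart logic):

```python
import numpy as np

# Minimal Lanczos sketch: reduce a symmetric matrix A to a small k x k
# tridiagonal matrix T whose extreme eigenvalues (Ritz values) rapidly
# approximate the extreme eigenvalues of A.
def lanczos_tridiag(A, k, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q_prev, b = np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b * q_prev            # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Test matrix with a well-separated top eigenvalue (1000.0).
d = np.arange(1.0, 201.0)
d[-1] = 1000.0
A = np.diag(d)
T = lanczos_tridiag(A, 30)                # 30 steps on a 200 x 200 matrix
top_ritz = np.linalg.eigvalsh(T)[-1]
print(abs(top_ritz - 1000.0) < 1e-6)
```

    The dominant cost per step is the matrix-vector product `A @ q`, which is why vectorizing exactly that kernel, as the abstract reports, pays off most.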

  19. Effects of Computer Based Learning on Students' Attitudes and Achievements towards Analytical Chemistry

    ERIC Educational Resources Information Center

    Akcay, Hüsamettin; Durmaz, Asli; Tüysüz, Cengiz; Feyzioglu, Burak

    2006-01-01

    The aim of this study was to compare the effects of computer-based learning and traditional method on students' attitudes and achievement towards analytical chemistry. Students from Chemistry Education Department at Dokuz Eylul University (D.E.U) were selected randomly and divided into three groups; two experimental (Eg-1 and Eg-2) and a control…

  20. Academic attainment and the high school science experiences among high-achieving African American males

    NASA Astrophysics Data System (ADS)

    Trice, Rodney Nathaniel

    This study examines the educational experiences of high achieving African American males. More specifically, it analyzes the influences on their successful navigation through high school science. Through a series of interviews, observations, questionnaires, science portfolios, and review of existing data the researcher attempted to obtain a deeper understanding of high achieving African American males and their limitations to academic attainment and high school science experiences. The investigation is limited to ten high achieving African American male science students at Woodcrest High School. Woodcrest is situated at the cross section of a suburban and rural community located in the southeastern section of the United States. Although this investigation involves African American males, all of whom are successful in school, its findings should not be generalized to this nor any other group of students. The research question that guided this study is: What are the limitations to academic attainment and the high school science experiences of high achieving African American males? The student participants expose how suspension and expulsion, special education placement, academic tracking, science instruction, and teacher expectation influence academic achievement. The role parents play, student self-concept, peer relationships, and student learning styles are also analyzed. The anthology of data rendered three overarching themes: (1) unequal access to education, (2) maintenance of unfair educational structures, and (3) authentic characterizations of African American males. Often the policies and practices set in place by school officials aid in creating hurdles to academic achievement. These policies and practices are often formed without meaningful consideration of the unintended consequences that may affect different student populations, particularly the most vulnerable. The findings from this study expose that high achieving African American males face major

  1. High Speed Computational Ghost Imaging via Spatial Sweeping

    NASA Astrophysics Data System (ADS)

    Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai

    2017-03-01

    Computational ghost imaging (CGI) achieves single-pixel imaging by using a Spatial Light Modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical application. This paper proposes to bypass this limitation by trading off the SLM’s redundant spatial resolution for multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high-resolution SLM multiplies the modulation frequency within the spatial-resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude speed-up through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes it towards practical applications.
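
    The single-pixel reconstruction principle underlying CGI can be simulated in a few lines. The sketch below is a generic second-order-correlation reconstruction on synthetic data, not the authors' pipeline; the scene, pattern count, and sizes are all invented for illustration:

```python
import numpy as np

# Toy computational ghost imaging: illuminate a scene with known random
# patterns, record only the total transmitted intensity ("bucket" signal),
# and recover the scene by correlating bucket values with the patterns.
rng = np.random.default_rng(0)
size, n_patterns = 16, 20000

obj = np.zeros((size, size))
obj[4:12, 6:10] = 1.0                       # ground-truth scene (a bar)

patterns = rng.random((n_patterns, size, size))   # structured illumination
bucket = (patterns * obj).sum(axis=(1, 2))        # single-pixel detector

# Second-order correlation <(B - <B>) P> recovers the object up to scale.
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)

corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
print(corr > 0.9)
```

    Because each measurement needs one new pattern, total acquisition time is pattern count divided by the modulation rate, which is why multiplying the effective modulation frequency, as the paper does with sweeping mirrors, directly multiplies imaging speed.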

  2. Setting Educational Priorities: High Achievers Speak Out. White Paper.

    ERIC Educational Resources Information Center

    Dickeson, Robert C.

    Noting that high-achieving Indiana high school students can provide important insights into the educational system in the state, this study examined the opinions of recipients of Ameritchieve recognition, National Merit finalists, African-American students who were National Achievement finalists, and national Hispanic Scholar finalists, all from…

  3. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system

  4. High-order computational fluid dynamics tools for aircraft design

    PubMed Central

    Wang, Z. J.

    2014-01-01

    Most forecasts predict an annual airline traffic growth rate between 4.5 and 5% in the foreseeable future. To sustain that growth, the environmental impact of aircraft cannot be ignored. Future aircraft must have much better fuel economy, dramatically less greenhouse gas emissions and noise, in addition to better performance. Many technical breakthroughs must take place to achieve the aggressive environmental goals set up by governments in North America and Europe. One of these breakthroughs will be physics-based, highly accurate and efficient computational fluid dynamics and aeroacoustics tools capable of predicting complex flows over the entire flight envelope and through an aircraft engine, and computing aircraft noise. Some of these flows are dominated by unsteady vortices of disparate scales, often highly turbulent, and they call for higher-order methods. As these tools will be integral components of a multi-disciplinary optimization environment, they must be efficient to impact design. Ultimately, the accuracy, efficiency, robustness, scalability and geometric flexibility will determine which methods will be adopted in the design process. This article explores these aspects and identifies pacing items. PMID:25024419

  5. Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    High Performance Fortran (HPF) is one of the modern, common techniques for achieving high-performance parallel computation. We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code proved almost comparable to that of the VPP Fortran version. A 3-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, with an efficiency of 76.5% on the VPP5000/56 in vector and parallel computation relative to catalog values. We conclude that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.

  6. Beyond intuitive anthropomorphic control: recent achievements using brain computer interface technologies

    NASA Astrophysics Data System (ADS)

    Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco

    2017-05-01

    Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic arm mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.

  7. Comparing the Effect of Two Types of Computer Screen Background Lighting on Students' Reading Engagement and Achievement

    ERIC Educational Resources Information Center

    Botello, Jennifer A.

    2014-01-01

    With increased dependence on computer-based standardized tests to assess academic achievement, technological literacy has become an essential skill. Yet, because students have unequal access to technology, they may not have equal opportunities to perform well on these computer-based tests. The researcher had observed students taking the STAR…

  8. Supporting High School Student Accomplishment of Biology Content Using Interactive Computer-Based Curricular Case Studies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Steve; Hodges, Georgia W.; Moore, James N.; Cohen, Allan; Jang, Yoonsun; Brown, Scott A.; Kwon, Kyung A.; Jeong, Sophia; Raven, Sara P.; Jurkiewicz, Melissa; Robertson, Tom P.

    2017-11-01

    Research into the efficacy of modules featuring dynamic visualizations, case studies, and interactive learning environments is reported here. This quasi-experimental 2-year study examined the implementation of three interactive computer-based instructional modules within a curricular unit covering cellular biology concepts in an introductory high school biology course. The modules featured dynamic visualizations and focused on three processes that underlie much of cellular biology: diffusion, osmosis, and filtration. Pre-tests and post-tests were used to assess knowledge growth across the unit. A mixture Rasch model analysis of the post-test data revealed two groups of students. In both years of the study, a large proportion of the students were classified as low-achieving based on their pre-test scores. The use of the modules in the Cell Unit in year 2 was associated with a much larger proportion of the students having transitioned to the high-achieving group than in year 1. In year 2, the same teachers taught the same concepts as year 1 but incorporated the interactive computer-based modules into the cell biology unit of the curriculum. In year 2, 67% of students initially classified as low-achieving were classified as high-achieving at the end of the unit. Examination of responses to assessments embedded within the modules as well as post-test items linked transition to the high-achieving group with correct responses to items that both referenced the visualization and the contextualization of that visualization within the module. This study points to the importance of dynamic visualization within contextualized case studies as a means to support student knowledge acquisition in biology.

  9. Exploring high-achieving sixth grade students' erroneous answers and misconceptions on the angle concept

    NASA Astrophysics Data System (ADS)

    Bütüner, Suphi Önder; Filiz, Mehmet

    2017-05-01

    The aim of this research was to investigate high achievers' erroneous answers and misconceptions on the angle concept. The participants consisted of 233 grade 6 students drawn from eight classes in two well-established elementary schools of Trabzon, Turkey. All the participants were considered to be high achievers in mathematics, graded 4 or 5 out of 5, and were selected via a purposive sampling method. Data were collected through six questions reflecting the learning competencies set out in the grade 6 curriculum in Turkey and the findings of previous studies that aimed to identify students' misconceptions of the angle concept. The questionnaire was then administered over a 40-minute period in each class. The findings were analysed by two researchers whose inter-rater agreement was computed as 0.97, or almost perfect. Thereafter, coding discrepancies were resolved, and consensus was established. We found that although the participants in this study were high achievers, they still held several misconceptions about the angle concept, such as failing to recognize a straight angle or a right angle in different orientations. We also show how some of these misconceptions could have arisen from the definitions or representations used in the textbook, and offer suggestions concerning its future content.
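
    An inter-rater agreement figure like the 0.97 reported above is commonly computed with Cohen's kappa, which corrects raw agreement for chance. The sketch below uses invented ratings, and kappa is an assumption here since the abstract does not name the statistic:

```python
# Cohen's kappa on hypothetical rater codings ("mis" = misconception,
# "ok" = correct answer). Kappa = (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e is agreement expected by chance.
def cohens_kappa(r1, r2):
    labels = sorted(set(r1) | set(r2))
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

rater1 = ["mis", "ok", "ok", "mis", "ok", "ok", "ok", "mis"]
rater2 = ["mis", "ok", "ok", "mis", "ok", "mis", "ok", "mis"]
print(round(cohens_kappa(rater1, rater2), 3))  # 7/8 raw agreement
```

    On conventional scales, kappa above roughly 0.81 is read as "almost perfect" agreement, matching the wording in the abstract.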

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  11. Problem-Based Learning Environment in Basic Computer Course: Pre-Service Teachers' Achievement and Key Factors for Learning

    ERIC Educational Resources Information Center

    Efendioglu, Akin

    2015-01-01

    This experimental study aims to determine pre-service teachers' achievements and key factors that affect the learning process with regard to problem-based learning (PBL) and lecture-based computer course (LBCC) conditions. The research results showed that the pre-service teachers in the PBL group had significantly higher achievement scores than…

  12. INSPIRED High School Computing Academies

    ERIC Educational Resources Information Center

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  13. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  14. Linear-array based full-view high-resolution photoacoustic computed tomography of whole mouse brain functions in vivo

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Pengfei; Wang, Lihong V.

    2018-02-01

    Photoacoustic computed tomography (PACT) is a non-invasive imaging technique offering high contrast, high resolution, and deep penetration in biological tissues. We report a PACT system equipped with a high-frequency linear array for anatomical and functional imaging of the whole mouse brain. The linear array was rotationally scanned in the coronal plane to achieve full-view coverage. We investigated spontaneous neural activities in the deep brain by monitoring hemodynamics and observed strong interhemispheric correlations between contralateral regions, both in the cortical layer and in the deep regions.

  15. The Effects of Implementing a Computer-Based Reading Support Program on the Reading Achievement of Sixth Graders

    ERIC Educational Resources Information Center

    Falke, Tricia Rae

    2012-01-01

    The purpose of this study was to determine the effects of a computer-based reading intervention on the reading achievement of sixth grade students in one elementary school in a suburban school district located in the Midwest region of the United States. Data were collected through two district mandated reading assessments and a computer-based…

  16. Form One Students' Engagement with Computer Games and Its Effect on Their Academic Achievement in a Malaysian Secondary School

    ERIC Educational Resources Information Center

    Eow, Yee Leng; Wan Ali, Wan Zah bte; Mahmud, Rosnaini bt.; Baki, Roselan

    2009-01-01

    The main purpose of the study was to address the association between computer games and students' academic achievement. The exceptional growth in the number of children playing computer games, along with the unease and incomplete understanding surrounding discussions of computer games, motivated this study. From a survey…

  17. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

Facility | NREL. The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…

  18. Effects of Computer-Assisted STAD, LTM and ICI Cooperative Learning Strategies on Nigerian Secondary School Students' Achievement, Gender and Motivation in Physics

    ERIC Educational Resources Information Center

Gambari, Isiaka Amosa; Yusuf, Mudasiru Olalere; Thomas, David Akpa

    2015-01-01

This study examined the effectiveness of computer-assisted instruction with the Student Team Achievement Division (STAD) and Learning Together Model (LTM) cooperative learning strategies on Nigerian secondary students' achievement and motivation in physics. The efficacy of an author-developed computer-assisted instructional (CAI) package for teaching…

  19. Effects of Computer-Assisted STAD, LTM and ICI Cooperative Learning Strategies on Nigerian Secondary School Students' Achievement, Gender and Motivation in Physics

    ERIC Educational Resources Information Center

    Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere; Thomas, David Akpa

    2015-01-01

This study examined the effectiveness of computer-assisted instruction with the Student Team Achievement Division (STAD) and Learning Together (LT) cooperative learning strategies on Nigerian secondary students' achievement and motivation in physics. The effectiveness of a computer-assisted instructional (CAI) package for teaching physics concepts in…

  20. Comparative Effects of Two Modes of Computer-Assisted Instructional Package on Solid Geometry Achievement

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Ezenwa, Victoria Ifeoma; Anyanwu, Romanus Chogozie

    2014-01-01

The study examined the effects of two modes of a computer-assisted instructional package on solid geometry achievement amongst senior secondary school students in Minna, Niger State, Nigeria. The influence of gender on the performance of students exposed to the CAI(AT) and CAI(AN) packages was also examined. This study adopted a pretest-posttest…

  1. Experiences of High-Achieving High School Students Who Have Taken Multiple Concurrent Advanced Placement Courses

    ERIC Educational Resources Information Center

    Milburn, Kristine M.

    2011-01-01

    Problem: An increasing number of high-achieving American high school students are enrolling in multiple Advanced Placement (AP) courses. As a result, high schools face a growing need to understand the impact of taking multiple AP courses concurrently on the social-emotional lives of high-achieving students. Procedures: This phenomenological…

  2. WDM package enabling high-bandwidth optical intrasystem interconnects for high-performance computer systems

    NASA Astrophysics Data System (ADS)

    Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.

    2006-02-01

The trend in optical interconnection technology is toward ever shorter distances, moving from long-haul, metro-access, and intersystem links into the system itself. Intrasystem interconnects such as data busses between microprocessors and memory blocks are still based on copper today. This creates a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by the underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier, and the feasibility of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For cost reasons, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted by mode dispersion. The advance of WDM technology toward intrasystem applications will provide the high bandwidth required by future high-performance computer systems: assuming, for example, 8 wavelength channels of 12 Gbps (SDR) each, optical on-board interconnects can be realized with data rates an order of magnitude higher than those of electrical interconnects over distances typical of today's computer boards and backplanes. The data rate doubles if DDR signaling is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which might enable the application of WDM technology to EOCB.

  3. The Effects of Cooperative and Individualistic Learning Structures on Achievement in a College-Level Computer-Aided Drafting Course

    ERIC Educational Resources Information Center

    Swab, A. Geoffrey

    2012-01-01

    This study of cooperative learning in post-secondary engineering education investigated achievement of engineering students enrolled in two intact sections of a computer-aided drafting (CAD) course. Quasi-experimental and qualitative methods were employed in comparing student achievement resulting from out-of-class cooperative and individualistic…

  4. Scientific Temper among Academically High and Low Achieving Adolescent Girls

    ERIC Educational Resources Information Center

    Kour, Sunmeet

    2015-01-01

    The present study was undertaken to compare the scientific temper of high and low achieving adolescent girl students. Random sampling technique was used to draw the sample from various high schools of District Srinagar. The sample for the present study consisted of 120 school going adolescent girls (60 high and 60 low achievers). Data was…

  5. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  6. Effect of Computer-Based Multimedia Presentation on Senior Secondary Students' Achievement in Agricultural Science

    ERIC Educational Resources Information Center

    Olori, Abiola Lateef; Igbosanu, Adekunle Olusegun

    2016-01-01

    The study was carried out to determine the use of computer-based multimedia presentation on Senior Secondary School Students' Achievement in Agricultural Science. The study was a quasi-experimental, pre-test, post-test control group research design type, using intact classes. A sample of eighty (80) Senior Secondary School One (SS II) students was…

  7. Instruction of Statistics via Computer-Based Tools: Effects on Statistics' Anxiety, Attitude, and Achievement

    ERIC Educational Resources Information Center

    Ciftci, S. Koza; Karadag, Engin; Akdal, Pinar

    2014-01-01

    The purpose of this study was to determine the effect of statistics instruction using computer-based tools, on statistics anxiety, attitude, and achievement. This study was designed as quasi-experimental research and the pattern used was a matched pre-test/post-test with control group design. Data was collected using three scales: a Statistics…

  8. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    ERIC Educational Resources Information Center

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  9. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  10. High-performance computing-based exploration of flow control with micro devices.

    PubMed

    Fujii, Kozo

    2014-08-13

The dielectric barrier discharge (DBD) plasma actuator, which controls flow separation, is a promising technology for realizing energy savings and noise reduction in fluid dynamic systems. However, the mechanism by which it controls flow separation is not clearly understood, and this lack of knowledge prevents practical use of the technology. Therefore, large-scale computations for the study of the DBD plasma actuator have been conducted using the Japanese petaflops supercomputer 'K' for three different Reynolds numbers. A number of new findings on the control of flow separation by the DBD plasma actuator have been obtained from the simulations, some of which are presented in this study, along with knowledge of suitable device parameters. The DBD plasma actuator is clearly shown to be very effective for controlling flow separation at a Reynolds number of around 10^5, where a lift-to-drag ratio several times larger can be achieved at high angles of attack after stall. For higher Reynolds numbers, separated flow is partially controlled, and flow analysis shows key features for achieving better control. DBD plasma actuators are a promising technology that could reduce fuel consumption and contribute to a green environment by achieving high aerodynamic performance. The knowledge described above can be obtained only with high-end computers such as the supercomputer 'K'.

  11. Effectiveness of Computer Animation and Geometrical Instructional Model on Mathematics Achievement and Retention among Junior Secondary School Students

    ERIC Educational Resources Information Center

    Gambari, A. I.; Falode, C. O.; Adegbenro, D. A.

    2014-01-01

This study investigated the effectiveness of computer animation and a geometry instructional model on mathematics achievement and retention among junior secondary school students in Minna, Nigeria. It also examined the influence of gender on students' achievement and retention. The research was a pre-test, post-test experimental and control group…

  12. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  13. Achieving high performance on the Intel Paragon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, D.S.; Maccabe, B.; Riesen, R.

    1993-11-01

When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix, which had required over 10 hours on the previous machine, was completed in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
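The reported figures are internally consistent, as a quick check against the standard ~(2/3)n³ flop count for dense LU factorization shows (a sketch; all numbers are taken from the abstract above):

```python
# Sanity-check the reported Paragon LU throughput using the standard
# ~(2/3) * n^3 floating-point operation count for dense LU factorization.
# All numbers below are taken from the abstract.

def lu_flops(n: int) -> float:
    """Approximate flop count for LU factorization of an n x n matrix."""
    return (2.0 / 3.0) * n ** 3

n = 44_000          # matrix dimension reported in the abstract
gflops = 40.0       # reported sustained rate, GFLOPS
seconds = lu_flops(n) / (gflops * 1e9)

print(f"{lu_flops(n):.2e} flops")     # ~5.68e13 flops
print(f"{seconds / 60:.1f} minutes")  # ~24 minutes, consistent with "less than half an hour"
```

At 40 GFLOPS the factorization takes roughly 24 minutes, matching the reported sub-half-hour run time.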

  14. Examining the Difference in Student Achievement between Face-to-Face and Online Computer Classes

    ERIC Educational Resources Information Center

    Hearn, Phillips Turner

    2017-01-01

    The purpose of this study was to compare the achievement of students taking a computer applications class in one of two instructional methods, traditional face-to-face and online, at a Southeastern community college. The research questions examined more than 3,000 samples from the summer of 2012 through the spring semester of 2016. There were…

  15. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15th order) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
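The Cauchy-Kowalewski time advancement mentioned above can be illustrated on the simplest model problem, linear advection, where every time derivative is exchanged for a spatial one. A minimal sketch (assuming an analytic sine profile in place of the paper's Hermite interpolation; all names and parameters are illustrative):

```python
import math

# Cauchy-Kowalewski idea for u_t + c*u_x = 0: each time derivative is
# replaced by a spatial derivative via d^k u/dt^k = (-c)^k d^k u/dx^k,
# so a Taylor series in time is summed from spatial derivatives alone.
# Here the spatial derivatives are exact (u = sin(x)); a real solver
# would obtain them from Hermite divided-difference interpolation.

c, dt, order = 1.0, 0.1, 8

def space_deriv(x: float, k: int) -> float:
    """k-th spatial derivative of u(x) = sin(x)."""
    return math.sin(x + k * math.pi / 2.0)

def ck_step(x: float) -> float:
    """Advance u at point x by one time step via the CK Taylor series."""
    return sum(((-c * dt) ** k / math.factorial(k)) * space_deriv(x, k)
               for k in range(order + 1))

x = 0.3
approx = ck_step(x)
exact = math.sin(x - c * dt)  # exact advected solution u(x, dt)
print(abs(approx - exact))    # tiny truncation error, O(dt^9)
```

With an 8th-order series the one-step error is at the level of floating-point round-off, which is why such recursions pair naturally with very high-order spatial interpolation.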

  16. The Meaning High-Achieving African-American Males in an Urban High School Ascribe to Mathematics

    ERIC Educational Resources Information Center

    Thompson, LaTasha; Davis, Julius

    2013-01-01

    Many researchers, educators, administrators, policymakers and members of the general public doubt the prevalence of high-achieving African-American males in urban high schools capable of excelling in mathematics. As part of a larger study, the current study explored the educational experiences of four high-achieving African-American males…

  17. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computation time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computation time of 181.6 seconds for the panel problem on a CONVEX computer was reduced to 14.1 seconds with the optimized vector algorithm. The best computation time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was obtained on the Cray Y-MP using an average of 3.63 processors.
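To see why the matrix-vector multiply is one of the dominant steps, consider a minimal textbook Lanczos iteration (a NumPy sketch under simplifying assumptions: the factorization and forward/backward solves of a shift-invert structural solver are omitted, and all symbols are illustrative, not from the report):

```python
import numpy as np

def lanczos(A: np.ndarray, m: int, seed: int = 0) -> np.ndarray:
    """Return the m x m tridiagonal matrix T whose extremal eigenvalues
    approximate extremal eigenvalues of the symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alpha, beta = np.zeros(m), np.zeros(m)
    b = 0.0
    for j in range(m):
        w = A @ q - b * q_prev       # one matrix-vector multiply per step
        alpha[j] = q @ w
        w -= alpha[j] * q            # three-term recurrence
        b = np.linalg.norm(w)
        beta[j] = b
        q_prev, q = q, w / b
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

# Usage: extremal eigenvalues of a small symmetric test matrix.
A = np.diag(np.arange(1.0, 101.0))  # eigenvalues 1..100
T = lanczos(A, 30)
print(np.linalg.eigvalsh(T).max())  # close to 100
```

Each iteration costs one `A @ q` plus a few vector operations, so vectorizing that product (and the sparse storage behind it) is exactly where the optimizations described above pay off.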

  18. Computing in high-energy physics

    DOE PAGES

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  19. Computing in high-energy physics

    NASA Astrophysics Data System (ADS)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  20. Computing in high-energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mount, Richard P.

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  1. High-Achieving High School Students and Not so High-Achieving College Students: A Look at Lack of Self-Control, Academic Ability, and Performance in College

    ERIC Educational Resources Information Center

    Honken, Nora B.; Ralston, Patricia A. S.

    2013-01-01

    This study investigated the relationship among lack of self-control, academic ability, and academic performance for a cohort of freshman engineering students who were, with a few exceptions, extremely high achievers in high school. Structural equation modeling analysis led to the conclusion that lack of self-control in high school, as measured by…

  2. High Performance Computing Meets Energy Efficiency - Continuum Magazine |

    Science.gov Websites

NREL. Simulation of wind turbines by Patrick J. Moriarty and Matthew J. Churchfield, NREL. The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data…

  3. Real-time Tsunami Inundation Prediction Using High Performance Computers

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are being actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study by Tsushima et al. (2009) proposed a method for instant tsunami source prediction based on arriving tsunami waveform data; as time passes, the prediction is improved with updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green's functions of the linear long-wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, improving the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow-water equations for inundation prediction is large, it has become feasible through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m; the resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
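As a rough illustration of the leap-frog staggered-grid scheme mentioned above, here is a minimal sketch for the 1-D linear shallow-water equations on a single grid (all parameters are illustrative; the actual model is non-linear with nested 405 m to 5 m grids):

```python
import math

# Leap-frog staggered-grid scheme, reduced to 1-D linear shallow water:
# surface elevation eta lives at cell centers, velocity u at cell faces,
# and the two fields are updated alternately in time.

g, depth = 9.81, 4000.0        # gravity (m/s^2), uniform ocean depth (m)
n, dx, dt = 400, 2000.0, 2.0   # cells, grid spacing (m), time step (s)
# CFL condition: sqrt(g*h)*dt/dx must stay below 1 for stability.
assert math.sqrt(g * depth) * dt / dx < 1.0

# Initial Gaussian hump of sea-surface elevation (amplitude 1 m).
eta = [math.exp(-((i - n // 2) * dx / 50e3) ** 2) for i in range(n)]
u = [0.0] * (n + 1)            # face velocities; u[0] = u[n] = 0 (walls)

for _ in range(600):           # 600 steps x 2 s = 20 minutes of tsunami time
    for i in range(1, n):      # momentum: u from the elevation gradient
        u[i] -= g * dt / dx * (eta[i] - eta[i - 1])
    for i in range(n):         # continuity: eta from the flux divergence
        eta[i] -= depth * dt / dx * (u[i + 1] - u[i])

print(max(eta))                # the hump has split into two outgoing waves
```

Because the walls are closed (`u[0] = u[n] = 0`), the elevation update telescopes and total mass is conserved exactly, a standard sanity check for this class of scheme; the production runs in the abstract add non-linearity, bottom friction, and wetting/drying for inundation.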

  4. High Achievement in Mathematics Education in India: A Report from Mumbai

    ERIC Educational Resources Information Center

    Raman, Manya

    2010-01-01

    This paper reports a study aimed at characterizing the conditions that lead to high achievement in mathematics in India. The study involved eight schools in the greater Mumbai region. The main result of the study is that the notion of high achievement itself is problematic, as reflected in the reports about mathematics achievement within and…

  5. Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display.

    PubMed

    Liu, Mali; Lu, Chihao; Li, Haifeng; Liu, Xu

    2018-02-19

We propose a bifocal computational near-eye light field display (bifocal computational display) and a structure parameters determination scheme (SPDS) for it that achieve greater depth of field (DOF), high resolution, accommodation, and a compact form factor. Using a liquid varifocal lens, two single-focal computational light fields are superimposed by time multiplexing to reconstruct a virtual object's light field while avoiding the need for a high refresh rate. By minimizing the deviation between the reconstructed light field and the original light field, we propose a framework for determining the structure parameters of the bifocal computational light field display. Applied with different objectives, SPDS can achieve either high average resolution or uniform resolution over the scene depth range. To analyze the advantages and limitations of the proposed method, we conducted simulations and constructed a simple prototype comprising a liquid varifocal lens, dual-layer LCDs, and a uniform backlight. The simulation and experimental results show that the proposed system achieves the expected performance. Owing to this performance, we expect the bifocal computational display and SPDS to contribute to daily-use and commercial virtual reality displays.

  6. High Involvement Mothers of High Achieving Children: Potential Theoretical Explanations

    ERIC Educational Resources Information Center

    Hunsaker, Scott L.

    2013-01-01

    In American society, parents who have high aspirations for the achievements of their children are often viewed by others in a negative light. Various pejoratives such as "pushy parent," "helicopter parent," "stage mother," and "soccer mom" are used in the common vernacular to describe these parents. Multiple…

  7. Gender Differences in Attitudes toward Mathematics between Low-Achieving and High-Achieving Fifth Grade Elementary Students.

    ERIC Educational Resources Information Center

    Rathbone, A. Sue

    Possible gender differences in attitudes toward mathematics were studied between low-achieving and high-achieving fifth-grade students in selected elementary schools within a large, metropolitan area. The attitudes of pre-adolescent children at an intermediate grade level were assessed to determine the effects of rapidly emerging gender-related…

  8. Effects of Partner's Ability on the Achievement and Conceptual Organization of High-Achieving Fifth-Grade Students.

    ERIC Educational Resources Information Center

    Carter, Glenda; Jones, M. Gail; Rua, Melissa

    2003-01-01

    Investigates high-achieving fifth-grade students' achievement gains and conceptual reorganization on convection. Features an instructional sequence of three dyadic inquiry investigations related to convection currents as well as pre- and post-assessment consisting of a multiple-choice test, a card sorting task, construction of a concept map, and…

  9. Do the Effects of Computer-Assisted Practice Differ for Children with Reading Disabilities with and without IQ-Achievement Discrepancy?

    ERIC Educational Resources Information Center

    Jimenez, Juan E.; Ortiz, Maria del Rosario; Rodrigo, Mercedes; Hernandez-Valle, Isabel; Ramirez, Gustavo; Estevez, Adelina; O'Shanahan, Isabel; Trabaue, Maria de la Luz

    2003-01-01

    A study assessed whether the effects of computer-assisted practice on visual word recognition differed for 73 Spanish children with reading disabilities with or without aptitude-achievement discrepancy. Computer-assisted intervention improved word recognition. However, children with dyslexia had more difficulties than poor readers during…

  10. High-End Computing for Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2001-01-01

    The objective of the First MIT Conference on Computational Fluid and Solid Mechanics (June 12-14, 2001) is to bring together industry and academia (and government) to nurture the next generation in computational mechanics. The objective of the current talk, 'High-End Computing for Incompressible Flows', is to discuss some of the current issues in large scale computing for mission-oriented tasks.

  11. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    ERIC Educational Resources Information Center

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  12. Restructuring the CS 1 classroom: Examining the effect of open laboratory-based classes vs. closed laboratory-based classes on Computer Science 1 students' achievement and attitudes toward computers and computer courses

    NASA Astrophysics Data System (ADS)

    Henderson, Jean Foster

    The purpose of this study was to assess the effect of classroom restructuring involving computer laboratories on student achievement and student attitudes toward computers and computer courses. The effects of the targeted student attributes of gender, previous programming experience, math background, and learning style were also examined. The open lab-based class structure consisted of a traditional lecture class with a separate, unscheduled lab component in which lab assignments were completed outside of class; the closed lab-based class structure integrated a lab component within the lecture class so that half the class was reserved for lecture and half the class was reserved for students to complete lab assignments by working cooperatively with each other and under the supervision and guidance of the instructor. The sample consisted of 71 students enrolled in four intact classes of Computer Science I during the fall and spring semesters of the 2006--2007 school year at two southern universities: two classes were held in the fall (one at each university) and two classes were held in the spring (one at each university). A counterbalanced repeated measures design was used in which all students experienced both class structures for half of each semester. The order of control and treatment was rotated among the four classes. All students received the same amount of class and instructor time. A multivariate analysis of variance (MANOVA) via a multiple regression strategy was used to test the study's hypotheses. Although the overall MANOVA model was statistically significant, independent follow-up univariate analyses relative to each dependent measure found that the only significant research factor was math background: Students whose mathematics background was at the level of Calculus I or higher had significantly higher student achievement than students whose mathematics background was less than Calculus I. The results suggest that classroom structures that

  13. Achieving High Reliability with People, Processes, and Technology.

    PubMed

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  14. Effects of Computer Assisted Learning Instructions on Reading Achievement among Middle School English Language Learners

    ERIC Educational Resources Information Center

    Bayley-Hamlet, Simone O.

    2017-01-01

    The purpose of this study was to examine the effect of Imagine Learning, a computer assisted language learning (CALL) program, on reading achievement for English language learners (ELLs), as measured by the reading scale of the Accessing Comprehension and Communication in English State-to-State assessment (ACCESS for ELLs, or ACCESS)…

  15. Using a Computer Animation to Teach High School Molecular Biology

    ERIC Educational Resources Information Center

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  16. The Relationship between Family Functioning and Academic Achievement in Female High School Students of Isfahan, Iran, in 2013-2014.

    PubMed

    Rezaei-Dehaghani, Abdollah; Keshvari, Mahrokh; Paki, Somayeh

    2018-01-01

    Nowadays, the most important problem of the educational system is the widespread occurrence of school failure. Therefore, detecting the factors that promote or prevent students' academic achievement is of utmost importance. Family functioning is considered a critical component of academic success. This study aimed to investigate the relationship between family functioning and academic achievement in female high school students in Isfahan. This descriptive correlational study was conducted through random sampling among 237 female high school students in Isfahan during the 2013-2014 school year. Data were collected with a personal characteristics form and the Bloom family functioning questionnaire. To analyze the data, descriptive statistics (mean and standard deviation) and inferential statistics (Pearson correlation and linear regression analysis) were computed using SPSS software. The results showed a significant correlation between family functioning (except lack of independence) and students' academic achievement (p < 0.05). Further, among the family functioning dimensions, expressiveness (β = 0.235, p < 0.001), family socialization (β = 0.219, p = 0.001), and cohesion (β = 0.211, p = 0.001) were the more reliable predictors of academic achievement. The results of this study showed that students' academic achievement is highly correlated with the functioning of their families. Therefore, cultural and educational programs intended to improve students' educational status should place family-function-centered plans at the heart of attention.

  17. Computer simulation of space station computer steered high gain antenna

    NASA Technical Reports Server (NTRS)

    Beach, S. W.

    1973-01-01

    The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.

  18. Industrial applications of high-performance computing for phylogeny reconstruction

    NASA Astrophysics Data System (ADS)

    Bader, David A.; Moret, Bernard M.; Vawter, Lisa

    2001-07-01

    Phylogenies (that is, tree-of-life relationships) derived from gene order data may prove crucial in answering some fundamental open questions in biomolecular evolution. Real-world interest is strong in determining these relationships. For example, pharmaceutical companies may use phylogeny reconstruction in drug discovery for discovering synthetic pathways unique to organisms that they wish to target. Health organizations study the phylogenies of organisms such as HIV in order to understand their epidemiologies and to aid in predicting the behaviors of future outbreaks. And governments are interested in aiding the production of such foodstuffs as rice, wheat and potatoes via genetics through understanding of the phylogenetic distribution of genetic variation in wild populations. Yet few techniques are available for difficult phylogenetic reconstruction problems. Appropriate tools for analysis of such data may aid in resolving some of the phylogenetic problems that have been analyzed without much resolution for decades. With the rapid accumulation of whole genome sequences for a wide diversity of taxa, especially microbial taxa, phylogenetic reconstruction based on changes in gene order and gene content is showing promise, particularly for resolving deep (i.e., ancient) branch splits. However, reconstruction from gene-order data is even more computationally expensive than reconstruction from sequence data, particularly in groups with large numbers of genes and highly-rearranged genomes. We have developed a software suite, GRAPPA, that extends the breakpoint analysis (BPAnalysis) method of Sankoff and Blanchette while running much faster: in a recent analysis of chloroplast genome data for species of Campanulaceae on a 512-processor Linux supercluster with Myrinet, we achieved a one-million-fold speedup over BPAnalysis. GRAPPA can use either breakpoint or inversion distance (computed exactly) for its computation and runs on single-processor machines as well as
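    The breakpoint distance that GRAPPA can use has a simple combinatorial definition: count the gene adjacencies in one genome that do not survive in the other. The following toy sketch (unsigned gene orders given as permutations of 1..n, capped at both ends so terminal adjacencies count) illustrates the definition only; it is not GRAPPA's optimized implementation, which also handles signed genes and inversion distance.

    ```python
    def breakpoint_distance(a, b):
        """Count adjacencies of gene order `a` that are absent from `b`.

        `a` and `b` are permutations of the genes 1..n. Sentinel caps 0 and
        n+1 are added so that adjacencies at the ends are counted too. An
        adjacency (x, y) of `a` survives if x and y are also neighbours
        (in either order) in `b`; every non-surviving adjacency is a
        breakpoint.
        """
        n = len(a)
        a = [0] + list(a) + [n + 1]
        b = [0] + list(b) + [n + 1]
        # Record every neighbour pair of b, in both orders.
        adj_b = set()
        for x, y in zip(b, b[1:]):
            adj_b.add((x, y))
            adj_b.add((y, x))
        return sum((x, y) not in adj_b for x, y in zip(a, a[1:]))

    # Identical orders have distance 0; swapping an interior block of
    # [1, 2, 3, 4] to [1, 3, 2, 4] breaks the (1,2) and (3,4) adjacencies.
    print(breakpoint_distance([1, 2, 3, 4], [1, 3, 2, 4]))  # → 2
    ```

    Breakpoint analysis then searches for ancestral gene orders minimizing the total of such distances over a candidate tree, which is what makes it so computationally demanding.
    
    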

  19. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

    A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion and to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project, with results presented here, are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and for the X-57 high-lift wing compare reasonably well. All of the computational codes confirmed that the X-57 wing and distributed electric propulsion system design achieves or exceeds the required maximum lift coefficient, C(sub L) = 3.95, for stall speed.

  20. How Do the Different Types of Computer Use Affect Math Achievement?

    ERIC Educational Resources Information Center

    Flores, Raymond; Inan, Fethi; Lin, Zhangxi

    2013-01-01

    In this study, the National Educational Longitudinal Study (ELS:2002) dataset was used and a predictive data mining technique, decision tree analysis, was implemented in order to examine which factors, in conjunction to computer use, can be used to predict high or low probability of success in high school mathematics. Specifically, this study…
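    Decision tree analysis of the kind described repeatedly splits the data on the feature/threshold pair that most reduces class impurity. A minimal single-split ("stump") sketch using Gini impurity shows the core mechanic; the feature vectors and "high"/"low" labels below are invented for illustration, not drawn from the ELS:2002 dataset.

    ```python
    from collections import Counter

    def gini(labels):
        """Gini impurity of a multiset of class labels (0 = pure)."""
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def best_stump(rows, labels):
        """Exhaustively find the split minimising weighted Gini impurity.

        rows: list of numeric feature vectors; labels: one class per row.
        Returns (feature_index, threshold, weighted_impurity); the feature
        index is None if no split beats the unsplit impurity.
        """
        n = len(rows)
        best = (None, None, gini(labels))
        for f in range(len(rows[0])):
            for t in sorted({r[f] for r in rows}):
                left = [y for r, y in zip(rows, labels) if r[f] <= t]
                right = [y for r, y in zip(rows, labels) if r[f] > t]
                if not left or not right:
                    continue
                score = (len(left) * gini(left) + len(right) * gini(right)) / n
                if score < best[2]:
                    best = (f, t, score)
        return best

    # Hypothetical data: [hours of academic computer use, hours of gaming]
    # against a high/low math-success label.
    rows = [[0, 5], [1, 6], [2, 1], [3, 2]]
    labels = ["low", "low", "high", "high"]
    print(best_stump(rows, labels))  # → (0, 1, 0.0): a perfect split
    ```

    A full decision tree applies this search recursively to each resulting subset, which is how techniques like the one used in the study surface interactions between computer-use variables and other predictors.
    
    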

  1. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, and the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel computing systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  2. Vocational interests of intellectually gifted and highly achieving young adults.

    PubMed

    Vock, Miriam; Köller, Olaf; Nagy, Gabriel

    2013-06-01

    Vocational interests play a central role in the vocational decision-making process and are decisive for later job satisfaction and vocational success. Based on Ackerman's (1996) notion of trait complexes, specific interest profiles of gifted high-school graduates can be expected. Vocational interests of gifted and highly achieving adolescents were compared to those of their less intelligent/achieving peers according to Holland's (1997) RIASEC model. Further, the impact of intelligence and achievement on interests was analysed while statistically controlling for potentially influencing variables, and changes in interests over time were investigated. N = 4,694 German students (age: M = 19.5, SD = 0.80; 54.6% female) participated in the study (TOSCA; Köller, Watermann, Trautwein, & Lüdtke, 2004). Interests were assessed in participants' final year at school and again 2 years later (N = 2,318). Gifted participants reported stronger investigative and realistic interests, but lower social interests, than less intelligent participants. Highly achieving participants reported higher investigative and (in wave 2) higher artistic interests. Considerable gender differences were found: gifted girls had a flat interest profile, while gifted boys had pronounced realistic and investigative interests and low social interests. Multilevel multiple regression analyses predicting interests by intelligence and school achievement revealed stable interest profiles. Beyond a strong gender effect, intelligence and school achievement each contributed substantially to the prediction of vocational interests. At the time around graduation from high school, gifted young adults show stable interest profiles, which differ strongly between gender and intelligence groups. These differences are relevant for programmes for the gifted and for vocational counselling. ©2012 The British Psychological Society.

  3. Evander Childs High School Computer Literacy and Word Processing Skills for Bilingual Students 1984-1985.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    In 1984-85, the Computer Literacy and Word Processing Program for Bilingual Students at Evander Childs High School (Bronx, New York) was in the first year of a two-year, Title VII funding cycle. The major goal of the program is to improve the educational achievement and employability skills of 100 Hispanic, limited English proficient (LEP) student…

  4. Mass storage: The key to success in high performance computing

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing and Communications initiatives in the world today. All are determined to help solve some 'Grand Challenge' problem, but each appears to be dominated by the pursuit of ever higher levels of CPU performance and interconnection bandwidth as the path to success, without regard to the impact of mass storage. My colleagues and I at Data Storage Technologies believe that every such project will ultimately have its performance against its goals measured by its ability to efficiently store and retrieve the 'deluge of data' created by end-users solving scientific Grand Challenge problems, and that the issue of mass storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing mass storage systems capable of storing and manipulating the truly 'massive' amounts of data associated with these challenges. In my presentation I will explore this critical issue and hypothesize solutions to the problem.

  5. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  6. Computer Utilization in Middle Tennessee High Schools.

    ERIC Educational Resources Information Center

    Lucas, Sam

    In order to determine the capacity of high schools to profit from the pre-high school computer experiences of its students, a study was conducted to measure computer utilization in selected high schools of Middle Tennessee. Questionnaires distributed to 50 principals in 28 school systems covered the following areas: school enrollment; number and…

  7. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  8. Asymmetric Core Computing for U.S. Army High-Performance Computing Applications

    DTIC Science & Technology

    2009-04-01

    …Playstation 4 (should one be announced). Reconfigurable computing refers to performing computations using Field Programmable Gate Arrays (FPGAs)…

  9. A Computer Based Program to Improve Reading and Mathematics Scores for High School Students.

    ERIC Educational Resources Information Center

    Bond, Carole L.; And Others

    A study examined the effect on reading achievement, mathematics achievement, and ACT scores when computer based instruction (CBI) was compressed into a 6-week period of time. In addition, the effects of learning style and receptive language deficits on these scores were studied. Computer based instruction is a primary source of instruction that…

  10. Computation of Surface Laplacian for tri-polar ring electrodes on high-density realistic geometry head model.

    PubMed

    Ma, Junwei; Yuan, Han; Sunderam, Sridhar; Besio, Walter; Ding, Lei

    2017-07-01

    Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers to diagnose and understand various brain diseases. By their nature, EEG signals have very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCRE based on a realistic geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE; areas without dense mesh reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and assessed possible numerical errors against a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for TCRE for the first time in a realistic geometry head model and demonstrate that it has better spatial resolution than SL computed from classic EEG recordings.
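    For intuition about what a Surface Laplacian computation does numerically, here is a generic five-point finite-difference sketch on a regular grid of scalp potentials. It is an illustration of the operator only, under the assumption of a flat, uniformly sampled surface; it is not the TCRE-specific estimator or the boundary-element mesh computation used in the study.

    ```python
    def surface_laplacian(phi, h=1.0):
        """Five-point finite-difference Laplacian of a 2-D potential grid.

        phi: list of rows of potentials sampled at spacing h.
        Returns the discrete (d2/dx2 + d2/dy2) at each interior point;
        the one-cell border is lost to the stencil.
        """
        rows, cols = len(phi), len(phi[0])
        lap = []
        for i in range(1, rows - 1):
            lap.append([
                (phi[i - 1][j] + phi[i + 1][j]
                 + phi[i][j - 1] + phi[i][j + 1]
                 - 4.0 * phi[i][j]) / h ** 2
                for j in range(1, cols - 1)
            ])
        return lap

    # Sanity check: for phi(x, y) = x^2 + y^2 the Laplacian is exactly 4.
    phi = [[float(i * i + j * j) for j in range(4)] for i in range(4)]
    print(surface_laplacian(phi))  # → [[4.0, 4.0], [4.0, 4.0]]
    ```

    The Laplacian acts as a spatial high-pass filter, which is why it sharpens spatial resolution relative to raw potentials; the concentric rings of a TCRE approximate the same second spatial derivative directly in hardware.
    
    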

  11. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase convective flow problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in coarse-grained computational fluid dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as to containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.

  12. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
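    The core idea above, replacing an expensive forward model with a cheap surrogate inside a Markov chain Monte Carlo sampler, can be sketched in a few lines. The surrogate function, measurement value, and noise level below are hypothetical stand-ins for illustration; the paper's actual diagnosis uses finite element simulations and strain-based crack identification.

    ```python
    import math
    import random

    def metropolis(log_post, theta0, n_samples, step=0.1, seed=0):
        """Random-walk Metropolis sampler over a scalar parameter."""
        rng = random.Random(seed)
        theta, lp = theta0, log_post(theta0)
        samples = []
        for _ in range(n_samples):
            prop = theta + rng.gauss(0.0, step)
            lp_prop = log_post(prop)
            # Accept with probability min(1, posterior ratio).
            if math.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        return samples

    # Hypothetical surrogate: a cheap closed-form map from a damage
    # parameter (e.g. crack size) to a strain reading, standing in for
    # a full finite element solve at every MCMC step.
    surrogate = lambda theta: 2.0 * theta + 0.5 * theta ** 2
    observed, sigma = 2.625, 0.05  # invented measurement and noise level

    def log_post(theta):
        # Flat prior on [0, 2]; Gaussian measurement likelihood.
        if not 0.0 <= theta <= 2.0:
            return -math.inf
        return -0.5 * ((surrogate(theta) - observed) / sigma) ** 2

    samples = metropolis(log_post, theta0=1.0, n_samples=5000)
    posterior_mean = sum(samples[1000:]) / len(samples[1000:])
    ```

    Because the surrogate costs microseconds instead of a full simulation, thousands of posterior samples become feasible in near real time, which is the speedup mechanism the paper exploits (alongside parallel evaluation of proposals on HPC hardware).
    
    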

  13. Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren. National Poverty Center Working Paper Series #13-02

    ERIC Educational Resources Information Center

    Fairlie, Robert W.; Robinson, Jonathan

    2013-01-01

    Computers are an important part of modern education, yet large segments of the population--especially low-income and minority children--lack access to a computer at home. Does this impede educational achievement? We test this hypothesis by conducting the largest-ever field experiment involving the random provision of free computers for home use to…

  14. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  15. How the Leaders of One High-Achieving, Large, Urban High School Communicate with Latino Families about Math

    ERIC Educational Resources Information Center

    Kittelson, Andrea

    2016-01-01

    The purpose of this instrumental case study was to understand the ways in which the leaders of one high-achieving, large, urban high school communicate with Latino families about math with the intent to shine a light on the issue of communication with families as it relates to student achievement and the persistent math achievement gap among…

  16. Reliability achievement in high technology space systems

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. L.

    1981-01-01

    The production of failure-free hardware is discussed. The elements required to achieve such hardware are: technical expertise to design, analyze, and fully understand the design; use of high reliability parts and materials control in the manufacturing process; and testing to understand the system and weed out defects. The durability of the Hughes family of satellites is highlighted.

  17. The Role of Principal Leadership in Achievement beyond Test Scores: An Examination of Leadership, Differentiated Curriculum and High-Achieving Students

    ERIC Educational Resources Information Center

    Else, Danielle F.

    2013-01-01

    Though research has validated a link between principal leadership and student achievement, questions remain regarding the specific relationship between the principal and high-achieving learners. This association facilitates understanding about forming curricular decisions for high ability learners. The study was conducted to examine the perceived…

  18. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  19. A Study of Impulsivity in Low-Achieving and High-Achieving Boys from Lower Income Homes. Final Report.

    ERIC Educational Resources Information Center

    Cohen, Shirley

    The purpose of this study was to explore the concept of impulsivity as a stylistic dimension affecting cognitive behavior, and whether impulsivity operates as a comprehensive, inflexible orientation in low achievers more than in high achievers. The Matching Familiar Figures Test, the Porteus Maze Test, and the Stroop Color-Word Test were used to…

  20. An Examination of Achievement Related Behavior of High and Low Achieving Inner City Pupils.

    ERIC Educational Resources Information Center

    Derevensky, Jeffrey L.; And Others

    This study investigated the behavioral differences between high and low achieving students in two Canadian inner city schools. One school consisted predominantly of first generation Portuguese, Greek, and Chinese children, while the other served a predominantly second or third generation population of English speaking Canadians. An academic…

  1. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and throughput rates are orders of magnitude larger than for present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited by facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turnaround when processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to process on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
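    High-tolerance computing on a spot market boils down to checkpointing progress and re-submitting only the remaining work whenever a node is reclaimed. A toy simulation of that pattern follows; the interruption model, chunked workload, and function names are invented for illustration and are not part of HySDS.

    ```python
    import random

    class SpotInterrupted(Exception):
        """Raised when the (simulated) spot instance is reclaimed mid-run."""

    def run_chunks(pending, state, fail_prob=0.3, rng=None):
        """Process work chunks on a 'spot node' that may vanish at any step."""
        rng = rng or random.Random()
        for chunk in list(pending):
            if rng.random() < fail_prob:
                raise SpotInterrupted  # node reclaimed by the market
            state["done"].append(chunk)  # checkpoint: durable progress record
            pending.remove(chunk)        # this chunk never needs re-running

    def process_with_retries(chunks, max_retries=50, seed=1):
        """Re-submit the remaining work after each interruption until done."""
        rng = random.Random(seed)
        state = {"done": [], "retries": 0}
        pending = list(chunks)
        while pending and state["retries"] < max_retries:
            try:
                run_chunks(pending, state, rng=rng)
            except SpotInterrupted:
                state["retries"] += 1  # lost the node; resume from checkpoint
        return state

    state = process_with_retries(list(range(10)))
    print(len(state["done"]), "chunks done after", state["retries"], "interruptions")
    ```

    The essential property is that completed chunks are never re-processed, so interruptions cost only the in-flight chunk; this is what makes an unpredictable but cheap market usable for large-scale processing.
    
    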

  2. Biculturalism and Academic Achievement of African American High School Students

    ERIC Educational Resources Information Center

    Rust, Jonathan P.; Jackson, Margo A.; Ponterotto, Joseph G.; Blumberg, Fran C.

    2011-01-01

    Biculturalism was examined as a factor that may positively affect the academic achievement of African American high school students, beyond cultural identity and self-esteem. Hierarchical regression analyses determined that cultural identity and academic self-esteem were important factors for academic achievement, but not biculturalism.…

  3. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges has obstructed the development of such vehicles. These technical challenges are partially due both to the inability to accurately test scaled vehicles in wind tunnels and to the time-intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, ranging from simple two-dimensional flow over flat surfaces to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two…

  4. The Chinese High School Student's Stress in the School and Academic Achievement

    ERIC Educational Resources Information Center

    Liu, Yangyang; Lu, Zuhong

    2011-01-01

    In a sample of 466 Chinese high school students, we examined the relationships between Chinese high school students' stress in the school and their academic achievements. Regression mixture modelling identified two different classes of the effects of Chinese high school students' stress on their academic achievements. One class contained 87% of…

  5. A Comparison of Emotional-Motivational (A-R-D Theory) Personality Characteristics in Learning Disabled, Normal Achieving, and High Achieving Children.

    ERIC Educational Resources Information Center

    Hufano, Linda D.

    The study examined emotional-motivational personality characteristics of 15 learning disabled, 15 normal achieving, and 15 high achieving students (grades 3-5). The study tested the hypothesis derived from the A-R-D (attitude-reinforcer-discriminative) theory of motivation that learning disabled (LD) children differ from normal and high achieving…

  6. Parenting Style, Perfectionism, and Creativity in High-Ability and High-Achieving Young Adults

    ERIC Educational Resources Information Center

    Miller, Angie L.; Lambert, Amber D.; Speirs Neumeister, Kristie L.

    2012-01-01

    The current study explores the potential relationships among perceived parenting style, perfectionism, and creativity in a high-ability and high-achieving young adult population. Using data from 323 honors college students at a Midwestern university, bivariate correlations suggested positive relationships between (a) permissive parenting style and…

  7. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  8. What Is the Predict Level of Which Computer Using Skills Measured in PISA for Achievement in Mathematics

    ERIC Educational Resources Information Center

    Ziya, Engin; Dogan, Nuri; Kelecioglu, Hulya

    2010-01-01

    This study aims at determining the extent to which computer using skills specified in Project for International Students Evaluation (PISA) 2006 predict Turkish students' achievement in mathematics. Apart from questions on mathematics, science and reading competencies, a student questionnaire, a school questionnaire and a parent questionnaire were…

  9. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    PubMed

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.

  10. Gender, Student Motivation and Academic Achievement in a Midsized Wisconsin High School

    ERIC Educational Resources Information Center

    Lutzke, Steven Ronald

    2013-01-01

    This mixed-methods study investigated relationships among gender, academic motivation, and achievement in a mid-sized Wisconsin high school. A questionnaire was developed that focused on perceived ability, achievement motives, and achievement goals. Interviews with teachers focused on relationships among academic motivation, gender, and achievement.…

  11. A methodology for achieving high-speed rates for artificial conductance injection in electrically excitable biological cells.

    PubMed

    Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C

    2001-12-01

    We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI; pronounced "merci"), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in the CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch-clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents and fast excitatory/inhibitory synapses.
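
    One cycle of such a conductance-injection loop can be sketched as follows (the gating kinetics and all parameter values are illustrative placeholders, not the paper's models; at the reported 50 kHz rate the time step would be 0.02 ms):

```python
import math

def dynamic_clamp_step(v_mV, m, dt_ms, g_max_nS=10.0, e_rev_mV=-80.0,
                       tau_ms=5.0):
    """One cycle of a simplified conductance-injection loop: given the
    measured membrane potential, advance a first-order gating variable m
    toward its voltage-dependent steady state and return the current to
    inject back into the cell."""
    m_inf = 1.0 / (1.0 + math.exp(-(v_mV + 40.0) / 5.0))  # toy activation curve
    m += dt_ms * (m_inf - m) / tau_ms                     # forward-Euler update
    i_pA = g_max_nS * m * (v_mV - e_rev_mV)               # I = g*m*(V - E); nS*mV = pA
    return i_pA, m
```

    The real-time constraint is that this whole read-compute-write cycle, including A/D and D/A conversion, must finish within one 20 µs period with minimal jitter.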

  12. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
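
    The grouping step described in the claim can be sketched in a few lines (hypothetical data structure; on the massively parallel machines the patent targets, `thread_addresses` would be gathered from the compute nodes):

```python
from collections import defaultdict

def group_threads_by_call_site(thread_addresses):
    """Group threads by the address of the calling instruction for their
    current frame: threads stuck at the same call site form one group,
    and a defective thread stands out as a small or singleton group.
    Input maps thread id -> calling-instruction address."""
    groups = defaultdict(list)
    for tid, addr in thread_addresses.items():
        groups[addr].append(tid)
    return dict(groups)
```

    Displaying groups instead of thousands of individual stacks is what makes the technique scale to high thread counts.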

  13. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  14. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    NASA Astrophysics Data System (ADS)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressure was observed at 220 MPa, 250 MPa, and 280 MPa. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. CFD was used to investigate the spray characteristics of biodiesel blends, such as spray penetration length, spray angle, and mixture formation of fuel-air mixing. The results show that as injection pressure increases, a wider spray angle and a longer spray penetration length are produced by both biodiesel blends and diesel fuel; the injection pressure strongly affects the mixture formation and the characteristics of the fuel spray, thus promoting fuel-air mixing.

  15. Lightweight Provenance Service for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrumentation mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  16. Parallel Computing:. Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  17. Success Despite Socioeconomics: A Case Study of a High-Achieving, High-Poverty School

    ERIC Educational Resources Information Center

    Tilley, Thomas Brent; Smith, Samuel J.; Claxton, Russell L.

    2012-01-01

    This case study of a high-achieving, high-poverty school describes the school's leadership, culture, and programs that contributed to its success. Data were collected from two surveys (the School Culture Survey and the Vanderbilt Assessment of Leadership in Education), observations at the school site, and interviews with school personnel. The…

  18. Academic Self-Efficacy of High Achieving Students in Mexico

    ERIC Educational Resources Information Center

    Camelo-Lavadores, Ana Karen; Sánchez-Escobedo, Pedro; Pinto-Sosa, Jesus

    2017-01-01

    The purpose of this study was to explore differences in the academic self-efficacy of Mexican high school students. A grid questionnaire was administered to 1,460 students from private and public schools. As expected, high-achieving students showed significantly higher academic self-efficacy than their peers. However, interesting gender…

  19. DVS-SOFTWARE: An Effective Tool for Applying Highly Parallelized Hardware To Computational Geophysics

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Herrera, G. S.

    2015-12-01

    Most geophysical systems are macroscopic physical systems. The behavior prediction of such systems is carried out by means of computational models whose basic models are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized version of such PDEs, it is necessary to apply highly parallelized supercomputers. For them, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is due to the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS-Software, which overcomes this limitation [2]. The DVS-Software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively applying the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key Words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). REFERENCES [1]. Herrera, Ismael and George F. Pinder, "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243p., 2012. [2]. Herrera, I., de la Cruz, L.M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", NUMER METH PART D E, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3]. Herrera, I., & Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity", Geofísica Internacional, 2015 (In press)

  20. High-Degree Neurons Feed Cortical Computations

    PubMed Central

    Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.

    2016-01-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron…
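
    Transfer entropy, the first of the two tools mentioned, can be sketched for binary spike trains with history length 1 (a minimal plug-in estimator; the study's actual analysis uses more elaborate estimators plus the partial information decomposition):

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy TE(X->Y) in bits for binary time series with
    history length 1: how much knowing x[t] reduces uncertainty about
    y[t+1] beyond what y[t] already tells us."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])                     # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_both = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * math.log2(p_y1_given_both / p_y1_given_y)
    return te
```

    For a target that deterministically copies a random source with a one-step delay, the estimator returns close to 1 bit, the maximum for a binary signal.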

  1. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; utilize lessons learned in high-order/spectral solution of swirling 3D jets in the electromagnetics project; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  2. Game-Based Practice versus Traditional Practice in Computer-Based Writing Strategy Training: Effects on Motivation and Achievement

    ERIC Educational Resources Information Center

    Proske, Antje; Roscoe, Rod D.; McNamara, Danielle S.

    2014-01-01

    Achieving sustained student engagement with practice in computer-based writing strategy training can be a challenge. One potential solution is to foster engagement by embedding practice in educational games; yet there is currently little research comparing the effectiveness of game-based practice versus more traditional forms of practice. In this…

  3. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  4. International note: between-domain relations of Chinese high school students' academic achievements.

    PubMed

    Yangyang, Liu

    2012-08-01

    The present study examined the between-domain relations of Chinese high school students' academic achievements. In a sample of 1870 Chinese 10th grade students, the results indicated that Chinese high school students' academic achievements were correlated across nine subjects. In line with the previous Western findings, the findings suggested that academic achievement was largely domain-general in nature. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  5. High Productivity Computing Systems and Competitiveness Initiative

    DTIC Science & Technology

    2007-07-01

    planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. … sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005 … among other things, an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and …

  6. Threatened and Placed at Risk: High Achieving African American Males in Urban High Schools

    ERIC Educational Resources Information Center

    McGee, Ebony O.

    2013-01-01

    This study investigated the risk and protective factors of 11 high-achieving African American males attending 4 urban charter high schools in a Midwestern city to determine what factors account for their resilience and success in mathematics courses, and in high school more generally. This research was guided by a Phenomenological Variant of…

  7. The Impact of Formative Assessment on Students in a High Achieving Middle School

    ERIC Educational Resources Information Center

    Toungette, William Thomas

    2012-01-01

    With the passage of the No Child Left Behind mandate, school systems clamored to ensure that all students showed academic growth. For schools with a high-achieving population, this could be a daunting task. This analysis examined the impact formative assessment had on student achievement in a high-achieving, middle school by measuring three…

  8. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
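
    The node-level half of the hybrid model can be sketched with a row-blocked matrix multiply (a pure-Python stand-in: in the article's setting the outer distribution would use message passing across nodes, with threads handling each node's block, as below):

```python
from concurrent.futures import ThreadPoolExecutor

def blocked_matmul(a, b, n_workers=4):
    """Multiply matrices (lists of lists) by assigning contiguous row
    blocks of the result to worker threads -- the per-node threading
    piece of a hybrid message-passing/multi-threading design."""
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]

    def do_rows(lo, hi):
        # each worker owns rows lo..hi-1 of c, so no locking is needed
        for i in range(lo, hi):
            for k in range(inner):
                aik = a[i][k]
                for j in range(cols):
                    c[i][j] += aik * b[k][j]

    step = (rows + n_workers - 1) // n_workers
    futures = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for lo in range(0, rows, step):
            futures.append(pool.submit(do_rows, lo, min(lo + step, rows)))
    for f in futures:
        f.result()  # propagate any worker exception
    return c
```

    Because each thread writes a disjoint row block, the decomposition needs no synchronization beyond joining the workers, which is what makes it map cleanly onto multicore chips.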

  9. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms is also presented. PMID:24486639

  10. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  11. Brain Hemisphericity and Mathematics Achievement of High School Students

    ERIC Educational Resources Information Center

    Fernandez, Sanny F.

    2011-01-01

    This study aimed to find out the brain hemisphericity and mathematics achievement of high school students. The respondents of the study were the 168 first year high school students of Colegio de San Jose, during school year 2010-2011 who were chosen through stratified random sampling. The descriptive and interview methods of research were used in…

  12. Influence of Computer-Assisted Roundhouse Diagrams on High School 9th Grade Students' Understanding the Subjects of "Force and Motion"

    ERIC Educational Resources Information Center

    Kocakaya, F.; Gönen, S.

    2014-01-01

    Main aim of this study is to examine the influence of computer-assisted roundhouse diagrams on high school 9th grade students' academic achievements in the subjects of "Force and Motion". The study was carried out in a public high school in Diyarbakir the province in the Southeast of Turkey. In the study, the "pre-test-post-test…

  13. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs is contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how a base 10 order of magnitude improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
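
    The segmentation-plus-lossless-compression requirement can be sketched as follows (zlib stands in for whatever codec the ACN pipeline uses, and byte lists stand in for FITS image data):

```python
import zlib

def segment_and_compress(pixels, segment_size):
    """Split a frame's pixel stream into independent segments and
    losslessly compress each one, so segments can be shipped to
    different nodes and reduced in parallel."""
    segments = [pixels[i:i + segment_size]
                for i in range(0, len(pixels), segment_size)]
    return [zlib.compress(bytes(seg)) for seg in segments]

def decompress_segment(blob):
    """Exact (lossless) inverse of the per-segment compression."""
    return list(zlib.decompress(blob))
```

    Since segments are independent, the elastic-processing property follows directly: adding institutes adds workers without any change to the segmentation scheme.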

  14. Electromagnetic Modeling of Human Body Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of the phenomenon of harvesting implanted devices using wireless powering coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, which can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom has been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  15. High-Performance Computing Data Center Warm-Water Liquid Cooling |

    Science.gov Websites

NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled. Warm-water liquid cooling technologies offer a more energy-efficient solution that also allows for effective

  16. Safety of High Speed Ground Transportation Systems : Analytical Methodology for Safety Validation of Computer Controlled Subsystems : Volume 2. Development of a Safety Validation Methodology

    DOT National Transportation Integrated Search

    1995-01-01

This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...

  17. The Influence of Using Momentum and Impulse Computer Simulation to Senior High School Students’ Concept Mastery

    NASA Astrophysics Data System (ADS)

    Kaniawati, I.; Samsudin, A.; Hasopa, Y.; Sutrisno, A. D.; Suhendi, E.

    2016-08-01

This research is based on students' lack of mastery of abstract physics concepts. Thus, this study aims to improve senior high school students' mastery of momentum and impulse concepts through the use of computer simulation. To achieve this objective, the research method employed was a pre-experimental one-group pretest-posttest design. A total of 36 grade-11 science students in one public senior high school in Bandung became the sample in this study. The instruments utilized to determine the increase in students' concept mastery were a pretest and posttest in multiple-choice format. After using computer simulations in physics learning, students' mastery of momentum and impulse concepts increased, as indicated by a normalized gain of 0.64, in the medium category.
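
    The reported gain of 0.64 is a normalized (Hake) gain computed from pretest and posttest percentages; a minimal sketch, with hypothetical class-average scores chosen to reproduce that value:

    ```python
    def normalized_gain(pre_pct: float, post_pct: float) -> float:
        """Hake's normalized gain <g> = (post - pre) / (100 - pre), scores in percent."""
        return (post_pct - pre_pct) / (100.0 - pre_pct)

    # hypothetical averages: 40% pretest, 78.4% posttest -> g = 0.64
    # ("medium" category conventionally means 0.3 <= g < 0.7)
    g = normalized_gain(40.0, 78.4)
    ```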

  18. Data Storage and Transfer | High-Performance Computing | NREL

    Science.gov Websites

High-Performance Computing (HPC) systems. WinSCP for Windows File Transfers: use to transfer files from a local computer to a remote computer. Robinhood for File Management: use this tool to manage your data files on Peregrine. Best

  19. Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors

    PubMed Central

    Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-01-01

    Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443
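
    Cronbach's α, the reliability statistic reported above, can be computed directly from an item-score matrix; a minimal sketch (the toy data below are illustrative, not the CPQ sample):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    # three respondents, two perfectly correlated items -> high (but not 1.0) alpha,
    # since the raw-score formula is sensitive to unequal item variances
    toy = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
    alpha = cronbach_alpha(toy)
    ```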

  20. Professional Competences of Teachers for Fostering Creativity and Supporting High-Achieving Students

    ERIC Educational Resources Information Center

    Hoth, Jessica; Kaiser, Gabriele; Busse, Andreas; Döhrmann, Martina; König, Johannes; Blömeke, Sigrid

    2017-01-01

    This paper addresses an important task teachers face in class: the identification and support of creative and high-achieving students. In particular, we examine whether primary teachers (1) have acquired professional knowledge during teacher education that is necessary to foster creativity and to teach high-achieving students, and whether they (2)…

  1. Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

In search of novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as those experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photo diodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. But InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. However, surface stabilization data indicate that InN could be grown at 900 K under high nitrogen pressures, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  3. Welcome to the NASA High Performance Computing and Communications Computational Aerosciences (CAS) Workshop 2000

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H. (Editor)

    2000-01-01

    The purpose of the CAS workshop is to bring together NASA's scientists and engineers and their counterparts in industry, other government agencies, and academia working in the Computational Aerosciences and related fields. This workshop is part of the technology transfer plan of the NASA High Performance Computing and Communications (HPCC) Program. Specific objectives of the CAS workshop are to: (1) communicate the goals and objectives of HPCC and CAS, (2) promote and disseminate CAS technology within the appropriate technical communities, including NASA, industry, academia, and other government labs, (3) help promote synergy among CAS and other HPCC scientists, and (4) permit feedback from peer researchers on issues facing High Performance Computing in general and the CAS project in particular. This year we had a number of exciting presentations in the traditional aeronautics, aerospace sciences, and high-end computing areas and in the less familiar (to many of us affiliated with CAS) earth science, space science, and revolutionary computing areas. Presentations of more than 40 high quality papers were organized into ten sessions and presented over the three-day workshop. The proceedings are organized here for easy access: by author, title and topic.

  4. Best Practices for Achieving High, Rapid Reading Gains

    ERIC Educational Resources Information Center

    Carbo, Marie

    2008-01-01

    The percentage of students who read at the proficient level on the National Assessment of Educational Progress (NAEP) has not improved, and is appallingly low. In order for students to achieve high reading gains and become life-long readers, reading comprehension and reading enjoyment must be the top two goals. This article presents several…

  5. Bullying and Victimization Rates among Gifted and High-Achieving Students

    ERIC Educational Resources Information Center

    Peters, Megan Parker; Bain, Sherry K.

    2011-01-01

    Bullying and victimization rates among 90 gifted and nongifted, high-achieving (HA) high school students were assessed by using the Reynolds Bully Victimization Scale (BVS; W. M. Reynolds, 2003). The mean scores indicate that gifted and HA high school students bully others and are victimized by others generally at unelevated rates based on BVS…

  6. High-performance scientific computing in the cloud

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  7. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit co-operates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained using an NVIDIA GT560Ti graphics card compared with a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
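
    The phase-computation step that dominates the GPU workload can be illustrated with the classic four-step phase-shifting formula. This sequential NumPy sketch is only a stand-in for the paper's GPU kernels, using synthetic fringe images with a known phase:

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Wrapped phase from four fringe images shifted by 0, 90, 180 and 270 degrees.
        With I_k = A + B*cos(phi + k*pi/2), phi = atan2(I3 - I1, I0 - I2)."""
        return np.arctan2(i3 - i1, i0 - i2)

    # synthetic fringes with a known phase ramp
    phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
    imgs = [128.0 + 100.0 * np.cos(phi_true + k * np.pi / 2.0) for k in range(4)]
    phi = four_step_phase(*imgs)   # recovers phi_true (up to wrapping)
    ```

    Because the formula is applied independently at every pixel, it maps naturally onto fine-grained GPU threads, which is the core of the speedup the paper reports.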

  8. The Effect of a One to One Laptop Initiative on High School Math Achievement in a Suburban High School Environment

    ERIC Educational Resources Information Center

    Heap, Bryan

    2018-01-01

    Technology continues to advance the pace of American education. Each year school districts across the country invest resources into computers, software, technology specialists, and staff development. The stated goal given to stakeholders is usually to increase student achievement, increase motivation, or to better prepare students for the future.…

  9. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    PubMed

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.

  10. Computational challenges, tools and resources for analyzing co- and post-transcriptional events in high throughput

    PubMed Central

    Bahrami-Samani, Emad; Vo, Dat T.; de Araujo, Patricia Rosa; Vogel, Christine; Smith, Andrew D.; Penalva, Luiz O. F.; Uren, Philip J.

    2014-01-01

    Co- and post-transcriptional regulation of gene expression is complex and multi-faceted, spanning the complete RNA lifecycle from genesis to decay. High-throughput profiling of the constituent events and processes is achieved through a range of technologies that continue to expand and evolve. Fully leveraging the resulting data is non-trivial, and requires the use of computational methods and tools carefully crafted for specific data sources and often intended to probe particular biological processes. Drawing upon databases of information pre-compiled by other researchers can further elevate analyses. Within this review, we describe the major co- and post-transcriptional events in the RNA lifecycle that are amenable to high-throughput profiling. We place specific emphasis on the analysis of the resulting data, in particular the computational tools and resources available, as well as looking towards future challenges that remain to be addressed. PMID:25515586

  11. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high
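
    The metamodel idea, running the expensive simulation only at sampled parameter values and then interpolating within the sampled space, can be sketched in one dimension. The "engine response" function and sampling grid below are hypothetical stand-ins, not the CONVERGE/Titan workflow:

    ```python
    import numpy as np

    def build_metamodel(simulate, samples):
        """Evaluate the expensive per-cycle 'simulation' only at sampled inputs and
        return a cheap interpolating surrogate over the sampled parameter range."""
        outputs = np.array([simulate(x) for x in samples])
        return lambda x: np.interp(x, samples, outputs)

    # hypothetical stand-in for one CFD engine cycle: heat release vs. EGR fraction
    cycle = lambda egr: np.exp(-3.0 * egr) * (1.0 + 0.1 * np.sin(20.0 * egr))

    egr_samples = np.linspace(0.0, 0.4, 17)        # coarse sampling of the parameter space
    surrogate = build_metamodel(cycle, egr_samples)
    ```

    The real approach replaces the 1-D linear interpolation with sparse-grid sampling and interpolation in many dimensions, so that the number of expensive single-cycle CFD runs stays tractable.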

  12. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  13. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
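
    Hermite divided-difference interpolation, the building block named above, doubles each node in a Newton divided-difference table so that prescribed first derivatives enter as the repeated-node differences. The sketch below is a deliberately low-order version (values plus first derivatives at two nodes, far below the paper's >15th-order methods):

    ```python
    import numpy as np

    def hermite_coeffs(xs, fs, dfs):
        """Newton divided-difference table with each node doubled; the first divided
        difference at a repeated node is the prescribed derivative f'(x)."""
        z = np.repeat(xs, 2)
        n = len(z)
        q = np.zeros((n, n))
        q[:, 0] = np.repeat(fs, 2)
        for i in range(1, n):
            for j in range(1, i + 1):
                if j == 1 and z[i] == z[i - 1]:
                    q[i, 1] = dfs[i // 2]
                else:
                    q[i, j] = (q[i, j - 1] - q[i - 1, j - 1]) / (z[i] - z[i - j])
        return z, np.diag(q)

    def hermite_eval(z, coeffs, x):
        """Evaluate the Newton-form interpolant at x (Horner-style)."""
        result = coeffs[-1]
        for k in range(len(coeffs) - 2, -1, -1):
            result = result * (x - z[k]) + coeffs[k]
        return result

    # matching f and f' of f(x) = x^2 at x = 0 and x = 1 reproduces x^2 exactly
    z, c = hermite_coeffs(np.array([0.0, 1.0]), np.array([0.0, 1.0]), np.array([0.0, 2.0]))
    ```

    Carrying higher derivatives at each node in the same table is what lets the paper's methods reach very high order on small stencils.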

  14. Hyperswitch Communication Network Computer

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

Hyperswitch Communications Network (HCN) computer is prototype multiple-processor computer being developed. Incorporates improved version of hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). Designed to support high-level software and expansion of itself. HCN computer is message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers, with respect to price/performance ratio, reliability, availability, and manufacturing. Design of HCN operating-system software provides flexible computing environment accommodating both parallel and distributed processing. Also achieves balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  15. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    ERIC Educational Resources Information Center

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  16. 2×2 dominant achievement goal profiles in high-level swimmers.

    PubMed

    Fernandez-Rio, Javier; Cecchini Estrada, Jose A; Mendez-Giménez, Antonio; Fernández-Garcia, Benjamín; Saavedra, Pablo

    2014-01-01

    The goal of this study was to assess achievement goal dominance, self-determined situational motivation and competence in high-level swimmers before and after three training sessions set at different working intensities (medium, sub-maximal and maximal). Nineteen athletes (males, n=9, 18.00±2.32 years; females, n=10, 16.30±2.01 years, range = 14-18) agreed to participate. They completed a questionnaire that included the Dominant Achievement Goal assessment instrument, the 2×2 Achievement Goals Questionnaire for Sport (AGQ-S), The Situational Motivation Scale (SIMS) and the Competence subscale of the Basic Psychological Needs in Exercise questionnaire (BPNES). Results indicated that participants overwhelmingly showed mastery-approach achievement goal dominance, and it remained stable at the conclusion of the different training sessions under all intensity levels. This profile was positively correlated to self-determined situational motivation and competence. However, swimmers' feelings of competence increased only after the medium intensity level training session. After the completion of the maximal intensity training session, swimmers' self-determined motivation was significantly lower compared to the other two training sessions, which could be caused by a temporary period of burnout. Results indicated that high-level swimmers had a distinct mastery-approach dominant achievement goal profile that was not affected by the workload of the different training sessions. They also showed high levels of self-determined situational motivation and competence. However, heavy workloads should be controlled because they can cause transitory burnout.

  17. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPU are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
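
    The feature-matching step identified as the computational bottleneck can be sketched as brute-force nearest-neighbour descriptor matching with a ratio test. This sequential NumPy version is only a reference point for the kind of parallel/GPU implementation the abstract describes; the descriptor arrays are assumed inputs:

    ```python
    import numpy as np

    def match_features(desc_a, desc_b, ratio=0.8):
        """Brute-force nearest-neighbour descriptor matching with a ratio test.
        desc_a, desc_b: (n, d) float arrays; returns a list of (i, j) index pairs."""
        # all pairwise squared Euclidean distances in one vectorized pass
        d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
        matches = []
        for i, row in enumerate(d2):
            j1, j2 = np.argsort(row)[:2]
            if row[j1] < ratio ** 2 * row[j2]:  # best clearly better than runner-up
                matches.append((i, int(j1)))
        return matches
    ```

    The distance matrix grows as the product of the two feature counts, which is why this step parallelizes well and benefits so strongly from GPU offloading.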

  18. Predictors of Enrollment in High School Computer Courses.

    ERIC Educational Resources Information Center

    Campbell, N. Jo; Perry, Katye M.

    Factors affecting the motivation of high school students to learn to use computers were examined in this study. The subjects were 160 students enrolled in a large city high school, 89 females and 71 males who represented five ethnic groups--White, Black, Hispanic, Asian, and American Indian. The majority of subjects had prior computer coursework…

  19. One-to-One Laptop Programs: Do Students in Identified Illinois High Schools Have an Advantage When State Assessments Are Computer-Based?

    ERIC Educational Resources Information Center

    Bleyer, Charles T.

    2017-01-01

    The purpose of this study was to determine if students in identified Illinois high schools who were a part of a one-to-one (1:1) laptop program achieved higher results on the computer-based Partnership for the Assessment of Readiness for College and Careers (PARCC) assessment than students in identified Illinois high schools that did not…

  20. When high achievers and low achievers work in the same group: the roles of group heterogeneity and processes in project-based learning.

    PubMed

    Cheng, Rebecca Wing-yi; Lam, Shui-fong; Chan, Joanne Chung-yan

    2008-06-01

    There has been an ongoing debate about the inconsistent effects of heterogeneous ability grouping on students in small group work such as project-based learning. The present research investigated the roles of group heterogeneity and processes in project-based learning. At the student level, we examined the interaction effect between students' within-group achievement and group processes on their self- and collective efficacy. At the group level, we examined how group heterogeneity was associated with the average self- and collective efficacy reported by the groups. The participants were 1,921 Hong Kong secondary students in 367 project-based learning groups. Student achievement was determined by school examination marks. Group processes, self-efficacy and collective efficacy were measured by a student-report questionnaire. Hierarchical linear modelling was used to analyse the nested data. When individual students in each group were taken as the unit of analysis, results indicated an interaction effect of group processes and students' within-group achievement on the discrepancy between collective- and self-efficacy. When compared with low achievers, high achievers reported lower collective efficacy than self-efficacy when group processes were of low quality. However, both low and high achievers reported higher collective efficacy than self-efficacy when group processes were of high quality. With 367 groups taken as the unit of analysis, the results showed that group heterogeneity, group gender composition and group size were not related to the discrepancy between collective- and self-efficacy reported by the students. Group heterogeneity was not a determinant factor in students' learning efficacy. Instead, the quality of group processes played a pivotal role because both high and low achievers were able to benefit when group processes were of high quality.

  1. High Stakes for High Achievers: State Accountability in the Age of ESSA

    ERIC Educational Resources Information Center

    Petrilli, Michael J.; Griffith, David; Wright, Brandon L.; Kim, Audrey

    2016-01-01

    In this report, the authors examine the extent to which states' current (or planned) accountability systems for elementary and middle schools attend to the needs of high-achieving students, and how these systems might be redesigned under the Every Student Succeeds Act (ESSA) to better serve all students. In their view, states can and should take…

  2. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

System Resource Allocations. To use NREL's high-performance computing (HPC) resources: compute hours on NREL HPC systems including Peregrine and Eagle; storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally done in response to an annual call for allocation

  3. Home Media and Children’s Achievement and Behavior

    PubMed Central

    Hofferth, Sandra L.

    2010-01-01

    This study provides a national picture of the time American 6–12 year olds spent playing video games, using the computer, and watching television at home in 1997 and 2003 and the association of early use with their achievement and behavior as adolescents. Girls benefited from computers more than boys and Black children’s achievement benefited more from greater computer use than did that of White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls and Black boys, but not White boys. Greater computer play was also associated with a lower risk of becoming socially isolated among girls. Computer use does not crowd out positive learning-related activities, whereas video game playing does. Consequently, increased video game play had both positive and negative associations with the achievement of girls but not boys. For boys, increased video game play was linked to increased aggressive behavior problems. PMID:20840243

  4. High-Order Methods for Computational Physics

    DTIC Science & Technology

    1999-03-01

    …computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh… Connectivity and communications are established by building a voxel database (VDB) of positions. A VDB maps each position to a… studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear…

  5. Onward to Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and also in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10^12 floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratories in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.

  6. Practically Perfect in Every Way: Can Reframing Perfectionism for High-Achieving Undergraduates Impact Academic Resilience?

    ERIC Educational Resources Information Center

    Dickinson, Mary J.; Dickinson, David A. G.

    2015-01-01

    This study focuses on a pan-disciplinary scheme that targeted high-achieving undergraduate students. Earlier research from the scheme argued that high achievers have discernibly different learning and personal development support needs. One of the most frequent self-reported challenges within this high-achieving group is perfectionism. This…

  7. The Relationship between Parental Involvement and Student Achievement in a Rural Florida High School

    ERIC Educational Resources Information Center

    Jackson, Willie A.

    2011-01-01

    Parental involvement is viewed as critical to the development of effective schools and student achievement. The relationship between parental involvement and achievement test scores at a rural high school in Florida was not known. This high school has not met the state standards as determined by the Florida Comprehensive Achievement Test (FCAT)…

  8. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to designing safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  9. Relationships among Stress, Coping, and Mental Health in High-Achieving High School Students

    ERIC Educational Resources Information Center

    Suldo, Shannon M.; Shaunessy, Elizabeth; Hardesty, Robin

    2008-01-01

    This study investigates the relationships among stress, coping, and mental health in 139 students participating in an International Baccalaureate (IB) high school diploma program. Mental health was assessed using both positive indicators (life satisfaction, academic achievement, academic self-efficacy) and negative indicators (psychopathology) of…

  10. The Effect of Computer-Assisted Cooperative Learning Methods and Group Size on the EFL Learners' Achievement in Communication Skills

    ERIC Educational Resources Information Center

    AbuSeileek, Ali Farhan

    2012-01-01

    This study explored the effect of cooperative learning small group size and two different instructional modes (positive interdependence vs. individual accountability) on English as a Foreign Language (EFL) undergraduate learners' communication skills (speaking and writing) achievement in computer-based environments. The study also examined the…

  11. Expanding Opportunities for High Academic Achievement: An International Baccalaureate Diploma Program in an Urban High School

    ERIC Educational Resources Information Center

    Mayer, Anysia P.

    2008-01-01

    Students of color are consistently underrepresented in honors and gifted programs nationwide, and even high-achieving students share many of the risk factors with their low-achieving peers. The study presented in this paper employed mixed methods to investigate the relationship between the design of a rigorous college preparatory program, the…

  12. The effects of computer simulation versus hands-on dissection and the placement of computer simulation within the learning cycle on student achievement and attitude

    NASA Astrophysics Data System (ADS)

    Hopkins, Kathryn Susan

    The value of dissection as an instructional strategy has been debated, but not evidenced in research literature. The purpose of this study was to examine the efficacy of using computer simulated frog dissection as a substitute for traditional hands-on frog dissection and to examine the possible enhancement of achievement by combining the two strategies in a specific sequence. In this study, 134 biology students at two Central Texas schools were divided into the five following treatment groups: computer simulation of frog dissection, computer simulation before dissection, traditional hands-on frog dissection, dissection before computer simulation, and textual worksheet materials. The effects on achievement were evaluated by labeling 10 structures on three diagrams, identifying 11 pinned structures on a prosected frog, and answering 9 multiple-choice questions over the dissection process. Attitude was evaluated using a thirty item survey with a five-point Likert scale. The quasi-experimental design was pretest/post-test/post-test nonequivalent group for both control and experimental groups, a 2 x 2 x 5 completely randomized factorial design (gender, school, five treatments). The pretest/post-test design was incorporated to control for prior knowledge using analysis of covariance. The dissection only group evidenced a significantly higher performance than all other treatments except dissection-then-computer on the post-test segment requiring students to label pinned anatomical parts on a prosected frog. Interactions between treatment and school in addition to interaction between treatment and gender were found to be significant. The diagram and attitude post-tests evidenced no significant difference. Results on the nine multiple-choice questions about dissection procedures indicated a significant difference between schools. The interaction between treatment and school was also found to be significant. On a delayed post-test, a significant difference in gender was

  13. Achieving high aspect ratio wrinkles by modifying material network stress.

    PubMed

    Chen, Yu-Cheng; Wang, Yan; McCarthy, Thomas J; Crosby, Alfred J

    2017-06-07

    Wrinkle aspect ratio, or the amplitude divided by the wavelength, is hindered by strain localization transitions when an increasing global compressive stress is applied to synthetic material systems. However, many examples from living organisms show extremely high aspect ratios, such as gut villi and flower petals. We use three experimental approaches to demonstrate that these high aspect ratio structures can be achieved by modifying the network stress in the wrinkle substrate. We modify the wrinkle stress and effectively delay the strain localization transition, such as folding, to larger aspect ratios by using a zero-stress initial wavy substrate, creating a secondary network with post-curing, or using chemical stress relaxation materials. A wrinkle aspect ratio as high as 0.85, almost three times higher than common values of synthetic wrinkles, is achieved, and a quantitative framework is presented to provide an understanding of the different strategies and predictions for future investigations.
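
    The aspect ratio defined in the abstract (amplitude divided by wavelength) can be made concrete with a short sketch; the numeric amplitudes and wavelengths below are illustrative values, not measurements from the paper:

```python
# Wrinkle aspect ratio as defined in the abstract: amplitude / wavelength.
# The numeric values below are illustrative, not measurements from the paper.

def aspect_ratio(amplitude_um: float, wavelength_um: float) -> float:
    """Return the dimensionless wrinkle aspect ratio A / lambda."""
    if wavelength_um <= 0:
        raise ValueError("wavelength must be positive")
    return amplitude_um / wavelength_um

# A typical synthetic wrinkle (~0.3) vs. the reported high-aspect-ratio case (0.85):
typical = aspect_ratio(amplitude_um=3.0, wavelength_um=10.0)   # 0.3
delayed = aspect_ratio(amplitude_um=8.5, wavelength_um=10.0)   # 0.85
print(typical, delayed)
```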

  14. Computation of Dielectric Response in Molecular Solids for High Capacitance Organic Dielectrics.

    PubMed

    Heitzer, Henry M; Marks, Tobin J; Ratner, Mark A

    2016-09-20

    The dielectric response of a material is central to numerous processes spanning the fields of chemistry, materials science, biology, and physics. Despite this broad importance across these disciplines, describing the dielectric environment of a molecular system at the level of first-principles theory and computation remains a great challenge and is of importance to understand the behavior of existing systems as well as to guide the design and synthetic realization of new ones. Furthermore, with recent advances in molecular electronics, nanotechnology, and molecular biology, it has become necessary to predict the dielectric properties of molecular systems that are often difficult or impossible to measure experimentally. In these scenarios, it would be highly desirable to be able to determine dielectric response through efficient, accurate, and chemically informative calculations. A good example of where theoretical modeling of dielectric response would be valuable is in the development of high-capacitance organic gate dielectrics for unconventional electronics such as those that could be fabricated by high-throughput printing techniques. Gate dielectrics are fundamental components of all transistor-based logic circuitry, and the combination of high dielectric constant and nanoscopic thickness (i.e., high capacitance) is essential to achieving high switching speeds and low power consumption. Molecule-based dielectrics offer the promise of cheap, flexible, and mass-producible electronics when used in conjunction with unconventional organic or inorganic semiconducting materials to fabricate organic field effect transistors (OFETs). The molecular dielectrics developed to date typically have limited dielectric response, which results in low capacitances, translating into poor performance of the resulting OFETs. Furthermore, the development of better-performing dielectric materials has been hindered by the current highly empirical and labor-intensive pace of synthetic…
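
    The abstract's link between high dielectric constant, nanoscopic thickness, and high capacitance follows from the parallel-plate relation C/A = ε0·k/d. A minimal sketch, with illustrative (not paper-derived) values of k and d:

```python
# Parallel-plate estimate of areal capacitance, C/A = eps0 * k / d.
# The k and d values below are illustrative, not from the paper.
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def areal_capacitance_nF_per_cm2(k: float, thickness_nm: float) -> float:
    """Capacitance per unit area in nF/cm^2 for dielectric constant k, thickness d."""
    d = thickness_nm * 1e-9        # thickness in metres
    c_per_m2 = EPS0 * k / d        # F/m^2
    return c_per_m2 * 1e9 / 1e4    # convert F/m^2 -> nF/cm^2

# A thin high-k molecular film vs. a thick low-k film:
print(areal_capacitance_nF_per_cm2(k=16.0, thickness_nm=10.0))   # high capacitance
print(areal_capacitance_nF_per_cm2(k=3.0, thickness_nm=300.0))   # low capacitance
```

The first case yields on the order of a microfarad per square centimetre, illustrating why thin high-k layers matter for low-voltage operation.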

  15. The Effects of Various High School Scheduling Models on Student Achievement in Michigan

    ERIC Educational Resources Information Center

    Pickell, Russell E.

    2017-01-01

    This study reviews research and data to determine whether student achievement is affected by the high school scheduling model, and whether changes in scheduling models result in statistically significant changes in student achievement, as measured by the ACT Composite, ACT English Language Arts, and ACT Math scores. The high school scheduling…

  16. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, it is very difficult to achieve the required processing power by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores under power and performance constraints. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we proposed a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
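
    The two compression techniques the abstract names, pruning and weight quantization, can be sketched roughly as follows; the layer shape, sparsity target, and symmetric int8 scheme are illustrative assumptions, not the authors' actual framework:

```python
import numpy as np

# Minimal sketch of the two compression steps named in the abstract:
# magnitude pruning followed by linear 8-bit weight quantization.
# Layer shape and thresholds are illustrative, not from the paper.

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out (approximately) the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    out = w.copy()
    out[np.abs(out) <= threshold] = 0.0
    return out

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of float weights to int8 plus a scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in layer weights
w_pruned = prune_by_magnitude(w, sparsity=0.9)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float32) * scale          # dequantized approximation
print("nonzero fraction:", np.count_nonzero(w_pruned) / w.size)
```

Pruning shrinks the memory footprint via sparsity, while int8 storage cuts each remaining weight from 32 bits to 8, matching the low-power processor features the abstract targets.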

  17. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
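
    The claimed transfer sequence (send an RTS, stream fixed-size memory FIFO chunks until an acknowledgement arrives, then finish with a direct put) can be illustrated with a toy simulation; all names, the chunk size, and the acknowledgement timing are hypothetical, not taken from the patent:

```python
# Toy simulation of the transfer protocol described above: the origin engine
# streams fixed-size chunks via "memory FIFO" sends until the RTS is
# acknowledged, then moves the remainder with a single "direct put".
# All names and the ack timing are illustrative, not from the patent.

CHUNK = 4  # predetermined portion size, in elements

def transfer(data, ack_after_chunks):
    log = [("rts", None)]                   # request-to-send message
    sent = 0
    chunks_sent = 0
    while sent < len(data):
        ack_received = chunks_sent >= ack_after_chunks
        if ack_received:
            log.append(("direct_put", data[sent:]))   # remaining data at once
            sent = len(data)
        else:
            log.append(("memory_fifo", data[sent:sent + CHUNK]))
            sent += CHUNK
            chunks_sent += 1
    return log

ops = transfer(list(range(10)), ack_after_chunks=2)
print([op for op, _ in ops])
# ['rts', 'memory_fifo', 'memory_fifo', 'direct_put']
```

The point of the design is latency hiding: useful data moves in FIFO chunks while the origin waits for the acknowledgement that makes the faster direct put safe.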

  18. High-efficiency-release targets for use at ISOL facilities: computational design

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Alton, G. D.

    1999-12-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected targets and thermal analyses of temperature distributions within a prototype target/heat-sink system subjected to primary ion beam irradiation are presented in this report.

  19. Antecedents to High Educational Achievement Among Southwestern Mexican Americans.

    ERIC Educational Resources Information Center

    Amodeo, Luiza B.; Martin, Jeanette

    The study examined antecedents to high educational achievement of 42 selected Mexican Americans (university professors, third-year law students, and third- and fourth-year medical students) in 5 southwestern universities (4 in California and 1 in New Mexico). Two related considerations prompted the investigation: failure of many Mexican Americans…

  20. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  1. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  2. Achieving realistic performance and decision-making capabilities in computer-generated air forces

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.; Santos, Eugene, Jr.; Zurita, Vincent B.; Benslay, James L., Jr.

    1997-07-01

    For a computer-generated force (CGF) system to be useful in training environments, it must be able to operate at multiple skill levels, exhibit competency at assigned missions, and comply with current doctrine. Because of the rapid rate of change in distributed interactive simulation (DIS) and the expanding set of performance objectives for any computer-generated force, the system must also be modifiable at reasonable cost and incorporate mechanisms for learning. Therefore, CGF applications must have adaptable decision mechanisms and behaviors and perform automated incorporation of past reasoning and experience into their decision processes. The CGF must also possess multiple skill levels for classes of entities, gracefully degrade its reasoning capability in response to system stress, possess an expandable modular knowledge structure, and perform adaptive mission planning. Furthermore, correctly performing individual entity behaviors is not sufficient. Issues related to complex inter-entity behavioral interactions, such as the need to maintain formation and share information, must also be considered. The CGF must also be able to acceptably respond to unforeseen circumstances and be able to make decisions in spite of uncertain information. Because of the need for increased complexity in the virtual battlespace, the CGF should exhibit complex, realistic behavior patterns within the battlespace. To achieve these necessary capabilities, an extensible software architecture, an expandable knowledge base, and an adaptable decision-making mechanism are required. Our lab has addressed these issues in detail. The resulting DIS-compliant system is called the automated wingman (AW). The AW is based on fuzzy logic, the common object database (CODB) software architecture, and a hierarchical knowledge structure. We describe the techniques we used to enable us to make progress toward a CGF entity that satisfies the requirements presented above. We present our design and…

  3. High-Achieving and Average Students' Reading Growth: Contrasting School and Summer Trajectories

    ERIC Educational Resources Information Center

    Rambo-Hernandez, Karen E.; McCoach, D. Betsy

    2015-01-01

    Much is unknown about how initially high-achieving students grow academically, especially given the measurement issues inherent in assessing growth for the highest performing students. This study compared initially high-achieving and average students' growth in reading (in a cohort of third-grade students from 2,000 schools) over 3 years.…

  4. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.

  5. Achievement Trap: How America Is Failing Millions of High-Achieving Students from Lower-Income Families

    ERIC Educational Resources Information Center

    Wyner, Joshua S.; Bridgeland, John M.; DiIulio, John J., Jr.

    2007-01-01

    This report chronicles the experiences of high-achieving lower-income students during elementary school, high school, college, and graduate school. Millions of high-achieving lower-income students are found in urban, suburban, and rural communities all across America, reflecting the racial, ethnic, and gender composition of the nation's schools,…

  6. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  7. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
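
    The task-splitting idea described above, dividing a large analysis into smaller parts for parallel execution on a Condor pool, might look roughly like the sketch below; the chunking scheme and sizes are illustrative assumptions, not Condor-COPASI's actual code:

```python
# Sketch of the "split into smaller parts" idea: divide a parameter scan
# into independent contiguous chunks, each of which could run as one
# Condor job. Chunk sizes here are illustrative, not Condor-COPASI's code.

def split_scan(values, n_jobs):
    """Split a list of scan points into at most n_jobs contiguous chunks."""
    size, extra = divmod(len(values), n_jobs)
    chunks, start = [], 0
    for i in range(n_jobs):
        end = start + size + (1 if i < extra else 0)  # spread the remainder
        if end > start:
            chunks.append(values[start:end])
        start = end
    return chunks

scan = [0.1 * i for i in range(10)]   # 10 values of one model parameter
jobs = split_scan(scan, n_jobs=4)
print([len(c) for c in jobs])          # [3, 3, 2, 2]
```

Because each chunk is independent, results can be gathered in any order and merged, which is what makes this kind of analysis a good fit for high-throughput computing.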

  8. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of a parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian Blob Detector and Haar Response Calculation, which are the most computing-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network and dedicated hardware for SURF descriptors, this architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS general pixel processing @ 100 MHz clock and achieves up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it can easily be migrated to a VLSI implementation.
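
    Both of the stages mapped onto the cellular network, Hessian blob detection and Haar response calculation, reduce in SURF to rectangular box sums, which an integral image makes O(1) per rectangle regardless of filter size. A minimal NumPy sketch of that core primitive (not the paper's hardware design):

```python
import numpy as np

# SURF's box filters rely on the integral image (summed-area table): any
# rectangular sum costs four lookups, independent of the rectangle's size.

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii: np.ndarray, r0, c0, r1, c1) -> int:
    """Sum of img[r0:r1, c0:c1] using four lookups in the integral image."""
    return int(ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0])

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))   # == img[1:3, 1:3].sum()
```

The constant-time box sum is what makes these stages so amenable to the simple, regular processing elements of a cellular array.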

  9. School factors affecting postsecondary career pursuits of high-achieving girls in mathematics and science

    NASA Astrophysics Data System (ADS)

    Yoo, Hyunsil

    This study examined the influences of secondary school experiences of high-achieving girls in math and science on their postsecondary career pursuits in science fields. Specifically, using the National Education Longitudinal Study of 1988 (NELS:88), the study investigated how science class experiences in high school affect science career persistence of high-achieving girls over and above personal and family factors. Selecting the top 10% on the 8th grade math and science achievement tests from two panel samples of 1988-1994 and 1988-2000, this study examined which science instructional experiences (i.e., lecture-oriented, experiment-oriented, and student-oriented) best predicted college major choices and postsecondary degree attainments in the fields of science after controlling for personal and family factors. A two-stage test was employed for the analysis of each panel sample. The first test examined the dichotomous career pursuits between science careers and non-science careers, and the second test examined the dichotomous pursuits within science careers: "hard" science and "soft" science. Logistic regression procedures were used with consideration of panel weights and design effects. This study identified that experiment-oriented and student-oriented instructional practices seem to positively affect science career pursuits of high-achieving females, while lecture-oriented instruction negatively affected their science career pursuits, and that the longitudinal effects of the two positive instructional contributors to science career pursuits appear to be differential between major choice and degree attainment. This study also found that the influences of instructional practices seem to be slight for females in general, while those for high-achieving females were highly considerable, regardless of whether negative or positive. Another result of the study found that only student-oriented instruction seemed to have positive effects for high-achieving males.…
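
    The logistic-regression-with-panel-weights analysis can be sketched as a weighted maximum-likelihood fit of a binary outcome; the synthetic data and simple gradient-ascent fitter below are illustrative assumptions, not the NELS:88 variables or the study's actual estimation procedure:

```python
import numpy as np

# Minimal NumPy sketch of weighted logistic regression: maximize the
# weighted log-likelihood by gradient ascent. Data are synthetic; the
# weights stand in for panel weights and are not the study's weights.

def fit_weighted_logit(X, y, w, lr=0.1, steps=2000):
    """Return [intercept, slope, ...] maximizing the weighted log-likelihood."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))    # predicted probabilities
        grad = Xb.T @ (w * (y - p)) / w.sum()   # weighted score function
        beta += lr * grad
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1))                  # one stand-in predictor
true_beta = np.array([-0.5, 1.5])
p = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * X[:, 0])))
y = (rng.random(500) < p).astype(float)        # binary career outcome
w = rng.uniform(0.5, 2.0, size=500)            # stand-in for panel weights
print(fit_weighted_logit(X, y, w))             # roughly [-0.5, 1.5]
```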

  10. Integrated computational study of ultra-high heat flux cooling using cryogenic micro-solid nitrogen spray

    NASA Astrophysics Data System (ADS)

    Ishimoto, Jun; Oh, U.; Tan, Daisuke

    2012-10-01

    A new type of ultra-high heat flux cooling system using an atomized spray of cryogenic micro-solid nitrogen (SN2) particles produced by a superadiabatic two-fluid nozzle was developed and numerically investigated for application to next-generation supercomputer processor thermal management. The fundamental heat transfer and cooling performance characteristics of a micro-solid nitrogen particulate spray impinging on a heated substrate were numerically investigated and experimentally measured by a new type of integrated computational-experimental technique. The Computational Fluid Dynamics (CFD) analysis, based on the Euler-Lagrange model, focuses on the cryogenic spray behavior of the atomized particulate micro-solid nitrogen and on its ultra-high heat flux cooling characteristics. Based on the numerically predicted performance, a new type of cryogenic spray cooling technique for application to ultra-high power density devices was developed. The integrated computation clarifies that the cryogenic micro-solid spray cooling characteristics are affected both by the heat transfer process of the micro-solid spray impinging on the heated surface and by the atomization behavior of the micro-solid particles. When micro-SN2 spray cooling was used, an ultra-high cooling heat flux was achieved during operation, a better cooling performance than that of liquid nitrogen (LN2) spray cooling. Because micro-SN2 cooling has the advantage of direct latent heat transport, which avoids the film boiling state, ultra-short time scale heat transfer in a thin boundary layer is more readily achieved than with LN2 spray. The present numerical prediction of the micro-SN2 spray cooling heat flux profile reasonably reproduces the measured cooling wall heat flux profiles. 
    The application of micro-solid spray as a refrigerant for next-generation computer processors is anticipated, and its ultra-high heat flux technology is expected

  11. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  13. Promoting High-Performance Computing and Communications. A CBO Study.

    ERIC Educational Resources Information Center

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  14. Effects of Lecture Method Supplemented with Music and Computer Animation on Senior Secondary School Students' Academic Achievement in Electrochemistry

    ERIC Educational Resources Information Center

    Akpoghol, T. V.; Ezeudu, F. O.; Adzape, J. N.; Otor, E. E.

    2016-01-01

    The study investigated the effects of Lecture Method Supplemented with Music (LMM) and Computer Animation (LMC) on senior secondary school students' academic achievement in electrochemistry in Makurdi metropolis. Six research questions and six hypotheses guided the study. The design of the study was quasi experimental, specifically the pre-test,…

  15. High-Speed Computation of the Kleene Star in Max-Plus Algebraic System Using a Cell Broadband Engine

    NASA Astrophysics Data System (ADS)

    Goto, Hiroyuki

    This research addresses a high-speed computation method for the Kleene star of the weighted adjacency matrix in a max-plus algebraic system. We focus on systems whose precedence constraints are represented by a directed acyclic graph and implement the method on a Cell Broadband Engine™ (CBE) processor. Since the resulting matrix gives the longest travel times between any two nodes, it is often utilized in scheduling problem solvers for a class of discrete event systems. This research, in particular, attempts to achieve a speedup through two approaches that the CBE processor supports: parallelization and SIMDization (Single Instruction, Multiple Data). The former refers to parallel computation using multiple cores, while the latter is a method whereby multiple elements are computed by a single instruction. Using an implementation on a Sony PlayStation 3™ equipped with a CBE processor, we found that SIMDization is effective regardless of the system's size and the number of processor cores used. We also found that the scalability of using multiple cores is remarkable, especially for systems with a large number of nodes. In a numerical experiment with 2000 nodes, we achieved a speedup of 20 times compared with the method without the above techniques.
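    The core computation here, independent of the CBE implementation, is the max-plus Kleene star A* = E ⊕ A ⊕ A² ⊕ …, where ⊕ is elementwise max and the matrix product uses (max, +) in place of (+, ×). A minimal serial sketch, assuming the convention that entry (i, j) holds the arc weight from node i to node j, with -inf meaning "no arc":

```python
import numpy as np

NEG_INF = -np.inf  # max-plus "zero": no arc / no path

def maxplus_mul(A, B):
    """Max-plus matrix product: C[i, j] = max_k (A[i, k] + B[k, j])."""
    n = A.shape[0]
    C = np.full((n, n), NEG_INF)
    for i in range(n):
        for j in range(n):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def kleene_star(A):
    """A* = E (+) A (+) A^2 (+) ...; for a DAG, powers beyond n-1 add
    nothing, and entry (i, j) of A* is the longest travel time from i to j."""
    n = A.shape[0]
    E = np.full((n, n), NEG_INF)
    np.fill_diagonal(E, 0.0)             # max-plus identity matrix
    star, power = E.copy(), E.copy()
    for _ in range(n - 1):
        power = maxplus_mul(power, A)
        star = np.maximum(star, power)   # elementwise max = max-plus sum
    return star

# Tiny DAG: arcs 0->1 (weight 3), 1->2 (weight 4), 0->2 (weight 5).
A = np.array([[NEG_INF, 3.0, 5.0],
              [NEG_INF, NEG_INF, 4.0],
              [NEG_INF, NEG_INF, NEG_INF]])
star = kleene_star(A)   # star[0, 2] picks the longer route 0 -> 1 -> 2
```

    The serial triple loop in `maxplus_mul` is exactly the part the paper distributes across cores (parallelization) and vectorizes within a core (SIMDization).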

  16. Self-Esteem and Academic Achievement of High School Students

    ERIC Educational Resources Information Center

    Moradi Sheykhjan, Tohid; Jabari, Kamran; Rajeswari, K.

    2014-01-01

    The primary purpose of this study was to determine the influence of self-esteem on academic achievement among high school students in Miandoab City, Iran. The research methodology is descriptive and correlational; descriptive and inferential statistics were used to analyze the data. The statistical population includes male and female high…

  17. Supplementary Education: The Hidden Curriculum of High Academic Achievement

    ERIC Educational Resources Information Center

    Gordon, Edmund W., Ed.; Bridglall, Beatrice L., Ed.; Meroe, Aundra Saa, Ed.

    2004-01-01

    In this book, the editors argue that while access to schools that enable and expect academic achievement is a necessary ingredient for the education of students, schools alone may not be sufficient to ensure universally high levels of academic development. Supplemental educational experiences may also be needed. The idea of supplementary education…

  18. Development of high performance scientific components for interoperability of computing packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each package. Chemistry algorithms are hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  19. The Effect of Using Computer Games on Lower Basic Stage Students' Achievement in English at Al-SALT Schools

    ERIC Educational Resources Information Center

    Al-Elaimat, Abeer Rashed

    2013-01-01

    The purpose of this study is to investigate the effect of using computer games on lower basic stage students' achievement in learning English at Al-SALT Schools. The population of this study consisted of all lower basic stage students in Al-SALT schools during the scholastic year 2011-2012. The sample of this study consisted of 88…

  20. Efficient and anonymous two-factor user authentication in wireless sensor networks: achieving user anonymity with lightweight sensor computation.

    PubMed

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho

    2015-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).
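    The division of labor described above, with expensive elliptic curve operations confined to the user-gateway leg and only lightweight symmetric operations on the sensor, can be sketched as follows. The key setup, function names, and message layout are illustrative assumptions, not the paper's actual protocol:

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared gateway<->sensor key; in a real deployment this
# would be provisioned securely at sensor registration time.
SENSOR_KEY = os.urandom(32)

def gateway_issue_ticket(sensor_id: bytes, nonce: bytes) -> bytes:
    """After the (ECC-based) anonymous user<->gateway handshake succeeds,
    the gateway authorizes the query with a cheap MAC the sensor can check."""
    return hmac.new(SENSOR_KEY, sensor_id + nonce, hashlib.sha256).digest()

def sensor_verify(sensor_id: bytes, nonce: bytes, tag: bytes) -> bool:
    """Sensor-side check: a single HMAC computation and constant-time
    comparison; no scalar-point multiplications on the sensor."""
    expected = hmac.new(SENSOR_KEY, sensor_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

    The point of the sketch is the asymmetry of cost: the gateway absorbs the public-key work, while the sensor's per-query work is one hash-based MAC.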

  2. Training | High-Performance Computing | NREL

    Science.gov Websites

    Training: Find training resources for using NREL's high-performance computing (HPC) systems, as well as related online tutorials. Upcoming training includes an HPC User Workshop on June 12th. A group meets to discuss Best Practices in HPC Training; this group developed a list of resources.

  3. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  4. Early College High School: Closing the Latino Achievement Gap

    ERIC Educational Resources Information Center

    Beall, Kristen Ann

    2016-01-01

    The population of United States Latino students is growing at a rapid rate but their academic achievement lags behind white and Asian students. This issue has significant consequences for the nation's economy, as the job market continues to demand more education and better skills. Early College High School programs have the potential to improve…

  5. Achievement Goals in a Presentation Task: Performance Expectancy, Achievement Goals, State Anxiety, and Task Performance

    ERIC Educational Resources Information Center

    Tanaka, Ayumi; Takehara, Takuma; Yamauchi, Hirotsugu

    2006-01-01

    The aims of the study were to test the linkages between achievement goals and task performance, as mediated by state anxiety arousal. Performance expectancy was also examined as an antecedent of achievement goals. A presentation task in a computer practice class was used as the achievement task. Fifty-three undergraduates (37 females and 16 males) were…

  6. Achievement as Resistance: The Development of a Critical Race Achievement Ideology among Black Achievers

    ERIC Educational Resources Information Center

    Carter, Dorinda J.

    2008-01-01

    In this article, Dorinda Carter examines the embodiment of a critical race achievement ideology in high-achieving black students. She conducted a yearlong qualitative investigation of the adaptive behaviors that nine high-achieving black students developed and employed to navigate the process of schooling at an upper-class, predominantly white,…

  7. Effects of Using a Computer Algebra System (CAS) on Junior College Students' Attitudes towards CAS and Achievement in Mathematics

    ERIC Educational Resources Information Center

    Leng, Ng Wee; Choo, Kwee Tiow; Soon, Lau Hock; Yi-Huak, Koh; Sun, Yap Yew

    2005-01-01

    This study examines the effects of using Texas Instruments' Voyage 200 calculator (V200), a graphing calculator with a built-in computer algebra system (CAS), on attitudes towards CAS and achievement in mathematics of junior college students (17 year olds). Students' attitudes towards CAS were examined using a 40-item Likert-type instrument…

  8. Developing Long-Term Computing Skills among Low-Achieving Students via Web-Enabled Problem-Based Learning and Self-Regulated Learning

    ERIC Educational Resources Information Center

    Tsai, Chia-Wen; Lee, Tsang-Hsiung; Shen, Pei-Di

    2013-01-01

    Many private vocational schools in Taiwan have taken to enrolling students with lower levels of academic achievement. The authors re-designed a course and conducted a series of quasi-experiments to develop students' long-term computing skills, and examined the longitudinal effects of web-enabled, problem-based learning (PBL) and self-regulated…

  9. Student Academic Achievement in Rural vs. Non-Rural High Schools in Wisconsin

    ERIC Educational Resources Information Center

    Droessler Mersch, Rebecca L.

    2012-01-01

    This study analyzed how Wisconsin rural public high schools' academic achievement compared to their city, suburb and town peers while controlling for ten factors. The Wisconsin Knowledge and Concepts Examination (WKCE) measured academic achievement for tenth graders including reading, language arts, mathematics, science and social studies. The ten…

  10. DURIP: High Performance Computing in Biomathematics Applications

    DTIC Science & Technology

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  11. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and metadata. Data mining refers to the process of automatically extracting data features, characteristics, and associations that are not readily visible through human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and metadata. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  12. Computers, Networks, and Desegregation at San Jose High Academy.

    ERIC Educational Resources Information Center

    Solomon, Gwen

    1987-01-01

    Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…

  13. The Impact of Developmental Advising for High-Achieving Minority Students.

    ERIC Educational Resources Information Center

    Novels, Alphonse N.; Ender, Steven C.

    1988-01-01

    The impact of developmental advising activities with high-achieving Black students at Indiana University of Pennsylvania was investigated. Results indicate that involvement in developmental advising had a positive impact on participating students' cumulative grade point average. (Author/MLW)

  14. High throughput computing: a solution for scientific analysis

    USGS Publications Warehouse

    O'Donnell, M.

    2011-01-01

    handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).

  15. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free, useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. 
Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of
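    The "generic support for divide and conquer computations" mentioned above can be illustrated with a sequential skeleton; in the proposed architecture the recursive calls would be dispatched to contributor nodes rather than executed locally. The function and parameter names here are assumptions for illustration, not the project's actual API:

```python
def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Generic divide-and-conquer skeleton: split the problem, solve the
    leaves, and merge partial results. A P2P computing engine would farm
    the recursive calls out to volunteer nodes instead of recursing here."""
    if is_base(problem):
        return solve_base(problem)
    subresults = [divide_and_conquer(part, is_base, solve_base, divide, combine)
                  for part in divide(problem)]
    return combine(subresults)

# Stand-in workload: summing a list by repeated halving.
total = divide_and_conquer(
    list(range(10)),
    is_base=lambda xs: len(xs) <= 1,
    solve_base=lambda xs: xs[0] if xs else 0,
    divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
    combine=sum,
)
```

    Because the skeleton is parameterized over `divide`, `combine`, and the base case, the same engine can host any computation expressible in this form.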

  16. Parent Involvement Practices of High-Achieving Elementary Science Students

    NASA Astrophysics Data System (ADS)

    Waller, Samara Susan

    This study addressed a prevalence of low achievement in science courses in an urban school district in Georgia. National leaders and educators have identified the improvement of science proficiency as critical to the future of American industry. The purpose of this study was to examine parent involvement in this school district and its contribution to the academic achievement of successful science students. Social capital theory guided this study by suggesting that students achieve best when investments are made into their academic and social development. A collective case study qualitative research design was used to interview 9 parent participants at 2 elementary schools whose children scored in the exceeds category on the Science CRCT. The research questions focused on what these parents did at home to support their children's academic achievement. Data were collected using a semi-structured interview protocol and analyzed through the categorical aggregation of transcribed interviews. Key findings revealed that the parents invested time and resources in 3 practices: communicating high expectations, supporting and developing key skills, and communicating with teachers. These findings contribute to social change at both the local and community level by creating a starting point for teachers, principals, and district leaders to reexamine the value of parent input in the educational process, and by providing data to support the revision of current parent involvement policies. Possibilities for further study building upon the findings of this study may focus on student perceptions of their parents' parenting as it relates to their science achievement.

  17. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
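    One form the device-level load balancing mentioned above can take is static partitioning: each device receives a share of the total photon count proportional to its measured throughput. The function name and proportional-split policy below are illustrative assumptions, not the authors' actual scheme:

```python
def partition_photons(total_photons, device_speeds):
    """Static device-level load balancing: give each device a photon count
    proportional to its measured throughput (e.g., photons/second from a
    short calibration run), handing any rounding remainder to the first
    device so every photon is simulated exactly once."""
    total_speed = sum(device_speeds)
    shares = [int(total_photons * s / total_speed) for s in device_speeds]
    shares[0] += total_photons - sum(shares)   # absorb rounding remainder
    return shares
```

    A thread-level analogue applies the same idea within one device, dividing a device's share across work-groups.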

  18. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  19. The Effect of Problem-Solving Instruction on the Programming Self-efficacy and Achievement of Introductory Computer Science Students

    NASA Astrophysics Data System (ADS)

    Maddrey, Elizabeth

    Research in academia and industry continues to identify a decline in enrollment in computer science. One major component of this decline in enrollment is a shortage of female students. The primary reasons for the gender gap presented in the research include lack of computer experience prior to their first year in college, misconceptions about the field, negative cultural stereotypes, lack of female mentors and role models, subtle discriminations in the classroom, and lack of self-confidence (Pollock, McCoy, Carberry, Hundigopal, & You, 2004). Male students are also leaving the field due to misconceptions about the field, negative cultural stereotypes, and a lack of self-confidence. Analysis of first year attrition revealed that one of the major challenges faced by students of both genders is a lack of problem-solving skills (Beaubouef, Lucas & Howatt, 2001; Olsen, 2005; Paxton & Mumey, 2001). The purpose of this study was to investigate whether specific, non-mathematical problem-solving instruction as part of introductory programming courses significantly increased computer programming self-efficacy and achievement of students. The results of this study showed that students in the experimental group had significantly higher achievement than students in the control group. While this shows statistical significance, due to the effect size and disordinal nature of the data between groups, care has to be taken in its interpretation. The study did not show significantly higher programming self-efficacy among the experimental students. There was not enough data collected to statistically analyze the effect of the treatment on self-efficacy and achievement by gender. However, differences in means were observed between the gender groups, with females in the experimental group demonstrating a higher than average degree of self-efficacy when compared with males in the experimental group and both genders in the control group. 
These results suggest that the treatment from this

  20. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. Arriving at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture.
This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters
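    The constrained parameter search described above can be illustrated with a toy sketch: exhaustive enumeration over a few hypothetical architecture parameters (vector-lane count and vector register length), minimizing an estimated cycle count subject to a logic-area budget. The cost and area models below are invented placeholders, not the report's actual formulation.

```python
# Toy sketch of a constrained architecture-parameter search: choose the
# vector-lane count and vector length that minimize estimated execution
# cycles while staying within a logic-area budget. All cost and area
# models here are hypothetical placeholders for illustration only.

def estimated_cycles(n_ops, lanes, vlen):
    # Each vector instruction covers `vlen` elements, processed `lanes`
    # elements per cycle, plus an assumed 2-cycle issue overhead.
    instructions = -(-n_ops // vlen)          # ceiling division
    cycles_per_instr = -(-vlen // lanes) + 2
    return instructions * cycles_per_instr

def area_cost(lanes, vlen):
    # Assumed model: area grows linearly with lanes and register storage.
    return 100 * lanes + vlen

def best_config(n_ops, area_budget):
    candidates = [(lanes, vlen)
                  for lanes in (1, 2, 4, 8, 16)
                  for vlen in (16, 32, 64, 128)
                  if area_cost(lanes, vlen) <= area_budget]
    return min(candidates, key=lambda c: estimated_cycles(n_ops, *c))

lanes, vlen = best_config(n_ops=10_000, area_budget=1000)
```

In this toy model the budget excludes the 16-lane configurations, and the search settles on the widest affordable datapath; the real methodology replaces these placeholder models with measured resource and performance estimates.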

  1. Integrated Computational Materials Engineering Development of Advanced High Strength Steel for Lightweight Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hector, Jr., Louis G.; McCarty, Eric D.

    The goal of the ICME 3GAHSS project was to successfully demonstrate the applicability of Integrated Computational Materials Engineering (ICME) for the development and deployment of third generation advanced high strength steels (3GAHSS) for immediate weight reduction in passenger vehicles. The ICME approach integrated results from well-established computational and experimental methodologies to develop a suite of material constitutive models (deformation and failure), manufacturing process and performance simulation modules, a properties database, as well as the computational environment linking them together for both performance prediction and material optimization. This is the Final Report for the ICME 3GAHSS project, which achieved the following objectives: 1) Developed a 3GAHSS ICME model, which includes atomistic, crystal plasticity, state variable and forming models. The 3GAHSS model was implemented in commercially available LS-DYNA and a user guide was developed to facilitate use of the model. 2) Developed and produced two 3GAHSS alloys using two different chemistries and manufacturing processes, for use in calibrating and validating the 3GAHSS ICME Model. 3) Optimized the design of an automotive subassembly by substituting 3GAHSS for AHSS, yielding a design that met or exceeded all baseline performance requirements with a 30% mass savings. A technical cost model was also developed to estimate the cost per pound of weight saved when substituting 3GAHSS for AHSS. The project demonstrated the potential for 3GAHSS to achieve up to 30% weight savings in an automotive structure at a cost penalty of $0.32 to $1.26 per pound of weight saved. The 3GAHSS ICME Model enables the user to design 3GAHSS to desired mechanical properties in terms of strength and ductility.

  2. The Strengths of High-Achieving Black High School Students in a Racially Diverse Setting

    ERIC Educational Resources Information Center

    Marsh, Kris; Chaney, Cassandra; Jones, Derrick

    2012-01-01

    Robert Hill (1972) identified strengths of Black families: strong kinship bonds, strong work orientation, adaptability of family roles, high achievement orientation, and religious orientation. Some suggest these strengths sustain the physical, emotional, social, and spiritual needs of Blacks. This study used narratives and survey data from a…

  3. Radio-frequency measurement in semiconductor quantum computation

    NASA Astrophysics Data System (ADS)

    Han, TianYi; Chen, MingBo; Cao, Gang; Li, HaiOu; Xiao, Ming; Guo, GuoPing

    2017-05-01

    Semiconductor quantum dots have attracted wide interest for the potential realization of quantum computation. To realize efficient quantum computation, fast manipulation and correspondingly fast readout are necessary. In the past few decades, considerable progress in quantum manipulation has been achieved experimentally. To meet the requirement of high-speed readout, radio-frequency (RF) measurement has been developed in recent years, such as the RF-QPC (radio-frequency quantum point contact) and the RF-DGS (radio-frequency dispersive gate sensor). Here we demonstrate the principle of radio-frequency reflectometry, then review the development and applications of RF measurement, which provides a feasible way to achieve high-bandwidth readout in quantum coherent control and also enriches the methods available to study these artificial mesoscopic quantum systems. Finally, we discuss prospects for the use of radio-frequency reflectometry in scaling up quantum computing models.
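    The reflectometry principle referred to above can be illustrated with the standard reflection-coefficient relation Γ = (Z − Z0)/(Z + Z0): a small change in the sensor's impedance shifts the fraction of the RF carrier that is reflected back up the line. The impedance values below are arbitrary illustrations, not measured device parameters.

```python
# Illustrative sketch of RF reflectometry readout: the sensor loads a
# transmission line of characteristic impedance Z0 (50 ohm assumed), and
# a change in its impedance Z shifts the reflection coefficient
# Gamma = (Z - Z0) / (Z + Z0), which modulates the reflected carrier.

def reflection_coefficient(z_sensor, z_line=50.0):
    return (z_sensor - z_line) / (z_sensor + z_line)

# A perfectly matched sensor reflects nothing; a shifted impedance
# reflects a measurable fraction of the carrier amplitude.
gamma_matched = reflection_coefficient(50.0)
gamma_shifted = reflection_coefficient(60.0)   # ~9% of the carrier
```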

  4. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available over 10 PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class, 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research at this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  5. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. Together, the PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
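    As a loose software analogy for the pixel-parallel SIMD style described above (NumPy arrays standing in for the PE array), each statement below acts on every pixel of the frame at once; the frame contents, threshold and neighbor operation are arbitrary examples, not the sensor's actual instruction set.

```python
# Sketch of pixel-parallel SIMD processing: one "instruction" updates
# all pixels simultaneously, the way a PE array does in hardware.
import numpy as np

frame = np.arange(16, dtype=np.int32).reshape(4, 4)   # fake 4x4 raw image

# One SIMD-style step: threshold every pixel at once.
binary = (frame > 7).astype(np.int32)

# Another: shift-and-add neighbor communication (as PE arrays perform)
# to form a horizontal two-pixel sum in a single parallel step.
shifted = np.roll(frame, -1, axis=1)
pair_sum = frame + shifted
```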

  6. Support Expressed in Congress for U.S. High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2004-06-01

    Advocates for a stronger U.S. position in high-performance computing, which could help with a number of grand challenges in the Earth sciences and other disciplines, hope that legislation recently introduced in the House of Representatives will help to revitalize U.S. efforts. The High-Performance Computing Revitalization Act of 2004 would amend the earlier High-Performance Computing Act of 1991 (Public Law 102-194), which is partially credited with helping to strengthen U.S. capabilities in this area. The bill has the support of the Bush administration.

  7. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start and obtain results for 296 compounds within 38 hours. The results indicate that the formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach of investing in on-premises hardware resources, cloud computing is agile, cost-effective, and scalable, and delivers similar performance.
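    The screening criterion described above reduces to a simple bookkeeping step: the formation energy per atom of a ternary compound is its total energy minus the composition-weighted elemental reference energies, with a negative value suggesting thermodynamic stability against decomposition into the elements. All energies below are invented illustrative numbers, not results from the study.

```python
# Hedged sketch of a formation-energy stability screen: subtract the
# composition-weighted elemental reference energies from the compound's
# total energy, then normalize per atom. All values are made up (eV).

def formation_energy_per_atom(e_total, counts, e_ref):
    # counts: {element: number of atoms in the simulation cell}
    # e_ref:  {element: reference energy per atom of the pure element}
    n_atoms = sum(counts.values())
    e_form = e_total - sum(n * e_ref[el] for el, n in counts.items())
    return e_form / n_atoms

e_ref = {"Li": -1.90, "Mg": -1.50, "Al": -3.75}   # assumed references
e_form = formation_energy_per_atom(
    e_total=-58.6,                      # hypothetical cell total energy
    counts={"Li": 4, "Mg": 4, "Al": 8},
    e_ref=e_ref,
)
stable = e_form < 0   # negative formation energy suggests stability
```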

  8. The Effects of Computer-Assisted Instruction on the Achievement, Attitudes and Retention of Fourth Grade Mathematics Students in North Cyprus

    ERIC Educational Resources Information Center

    Pilli, Olga; Aksu, Meral

    2013-01-01

    The purpose of this study was to examine the effects of the educational software "Frizbi Mathematics 4" on 4th grade student's mathematics achievement, retention, attitudes toward mathematics and attitude toward computer assisted learning. Two groups (experimental and control) of students from the state primary school in Gazimagusa,…

  9. A study of the effects of gender and different instructional media (computer-assisted instruction tutorials vs. textbook) on student attitudes and achievement in a team-taught integrated science class

    NASA Astrophysics Data System (ADS)

    Eardley, Julie Anne

    The purpose of this study was to determine the effect of different instructional media (computer-assisted instruction (CAI) tutorial vs. traditional textbook) on student attitudes toward science and computers and on achievement scores in a team-taught integrated science course, ENS 1001, "The Whole Earth Course," which was offered at Florida Institute of Technology during the Fall 2000 term. The effect of gender on student attitudes toward science and computers and achievement scores was also investigated. This study employed a randomized pretest-posttest control-group experimental research design with a sample of 30 students (12 males and 18 females). Students had registered for weekly lab sessions that accompanied the course and had been randomly assigned to the treatment or control group. The treatment group used a CAI tutorial for completing homework assignments and the control group used the required textbook. The Attitude toward Science and Computers Questionnaire and the Achievement Test were the two instruments administered during this study to measure students' attitudes and achievement score changes. A multivariate analysis of covariance (MANCOVA), using hierarchical multiple regression/correlation (MRC), was employed to determine: (1) treatment versus control group attitude and achievement differences; and (2) male versus female attitude and achievement differences. The differences between the treatment group's and control group's homework averages were determined by t test analyses. The overall MANCOVA model was found to be significant at p < .05. Examining the research factor set independent variables separately resulted in gender being the only variable that significantly contributed to explaining the variability in a dependent variable, attitudes toward science and computers. The t test analyses of the homework averages showed no significant differences. Contrary to the findings of this study, anecdotal information from
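    A minimal sketch of the independent-samples t test that studies like this apply to homework averages, using only the standard library; the scores are fabricated illustration data, and the two-tailed 5% critical value for 18 degrees of freedom is taken as approximately 2.101.

```python
# Minimal sketch of an independent-samples t test on homework averages
# (fabricated treatment vs. control scores). The pooled-variance t
# statistic is compared against the two-tailed 5% critical value.
import math
import statistics

treatment = [82, 88, 75, 91, 84, 79, 86, 90, 77, 85]
control   = [80, 85, 74, 89, 83, 78, 84, 88, 76, 84]

n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * statistics.variance(treatment)
              + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
t_stat = ((statistics.mean(treatment) - statistics.mean(control))
          / math.sqrt(pooled_var * (1 / n1 + 1 / n2)))

T_CRIT_18_DF = 2.101                       # two-tailed, alpha = 0.05
significant = abs(t_stat) > T_CRIT_18_DF   # here: not significant
```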

  10. Computer Programmed Milling Machine Operations. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Leonard, Dennis

    This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…

  11. Examining Organizational Practices That Predict Persistence among High-Achieving Black Males in High School

    ERIC Educational Resources Information Center

    Anderson, Kenneth Alonzo

    2016-01-01

    Background/Context: This article summarizes an increasing trend of antideficit Black male research in mathematics and highlights opportunities to add to the research. A review of the literature shows that antideficit researchers often examine relationships between individual traits and persistence of high-achieving Black males in mathematics.…

  12. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  13. Open-wedge high tibial osteotomy: comparison between manual and computer-assisted techniques.

    PubMed

    Iorio, R; Pagnottelli, M; Vadalà, A; Giannetti, S; Di Sette, P; Papandrea, P; Conteduca, F; Ferretti, A

    2013-01-01

    The purpose of our study was to compare clinical and radiological results of two groups of patients treated for medial compartment osteoarthritis of the knee with either conventional or computer-assisted open-wedge high tibial osteotomy (HTO). Goals of surgical treatment were a correction of the mechanical axis between 2° and 6° of valgus and a modification of posterior tibial slope between -2° and +2°. Twenty-four patients (27 knees) affected by varus knee deformity and operated with HTO were prospectively followed up. They were randomly divided into two groups, A (11 patients, conventional treatment) and B (13 patients, navigated treatment). The American Knee Society Score and the Modified Cincinnati Rating System Questionnaire were used for clinical assessment. All patients were radiologically evaluated with a comparative lower limb weight-bearing digital radiograph, a standard digital anteroposterior radiograph, a latero-lateral radiograph of the knee, and a Rosenberg view. Patients were followed up at a mean of 39 months. Clinical evaluation showed no statistical difference (n.s.) between the two groups. Radiological results showed an 86% reproducibility in achieving a mechanical axis of 182°-186° in group B compared to 23% in group A (p = 0.0392); furthermore, in group B, we achieved a modification of posterior tibial slope between -2° and +2° in 100% of patients, while in group A, this goal was achieved only in 24% of cases (p = 0.0021). High tibial osteotomy with navigation is more accurate and reproducible in the correction of the deformity compared to the standard technique. Therapeutic study, Level II.

  14. Attitude, Gender and Achievement in Computer Programming

    ERIC Educational Resources Information Center

    Baser, Mustafa

    2013-01-01

    The aim of this research was to explore the relationship among students' attitudes toward programming, gender and academic achievement in programming. The scale used for measuring students' attitudes toward programming was developed by the researcher and consisted of 35 five-point Likert type items in four subscales. The scale was administered to…

  15. Computational design of high efficiency release targets for use at ISOL facilities

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Alton, G. D.; Middleton, J. W.

    1999-06-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated vitreous carbon fiber (RVCF) or carbon-bonded-carbon-fiber (CBCF) to form highly permeable composite target matrices. Computational studies which simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected targets and thermal analyses of temperature distributions within a prototype target/heat-sink system subjected to primary ion beam irradiation will be presented in this report.
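    The role of short diffusion lengths in release efficiency can be made concrete with Crank's classic series for fractional diffusive release from a sphere of radius r: shrinking r by two orders of magnitude raises the released fraction dramatically at fixed time. The diffusion coefficient and radii below are illustrative assumptions, not measured target parameters.

```python
# Worked sketch of why short diffusion lengths boost release efficiency:
# fractional release from a sphere (uniform initial concentration, zero
# surface concentration), F(t) = 1 - (6/pi^2) * sum_{n>=1} exp(-n^2 pi^2
# D t / r^2) / n^2. Many terms are kept because the series converges
# slowly when D*t/r^2 is small.
import math

def release_fraction(D, r, t, terms=2000):
    s = sum(math.exp(-n**2 * math.pi**2 * D * t / r**2) / n**2
            for n in range(1, terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

D = 1e-9                  # cm^2/s, assumed diffusion coefficient
bulk  = release_fraction(D, r=1e-1, t=1.0)   # ~1 mm grain: tiny release
fiber = release_fraction(D, r=1e-3, t=1.0)   # ~10 um coating: ~100x more
```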

  16. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D, high-resolution imaging for large-scale seismic data.
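    The division and double-buffering idea can be sketched in plain Python (no GPU required): the trace volume is split into chunks sized to device memory, and while one chunk is being processed the next is already being staged, mimicking the transfer/compute overlap that CUDA streams provide. The chunk size and stage functions are illustrative stand-ins, not the paper's implementation.

```python
# Conceptual sketch of data division plus double buffering: chunk i+1 is
# staged while chunk i is being computed, so on real hardware transfer
# and compute overlap. Here the stages run sequentially for clarity.

def split_into_chunks(n_traces, chunk_size):
    return [(start, min(start + chunk_size, n_traces))
            for start in range(0, n_traces, chunk_size)]

def pipeline(chunks, transfer, compute):
    results = []
    staged = transfer(chunks[0])        # prefetch the first chunk
    for nxt in chunks[1:]:
        upcoming = transfer(nxt)        # would overlap compute on a GPU
        results.append(compute(staged))
        staged = upcoming
    results.append(compute(staged))     # drain the last staged chunk
    return results

chunks = split_into_chunks(n_traces=1000, chunk_size=300)
out = pipeline(chunks, transfer=lambda c: c, compute=lambda c: c[1] - c[0])
# every trace is covered exactly once across the four chunks
```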

  17. Effectiveness of Computer-Assisted Pronunciation Teaching and Verbal Ability on the Achievement of Senior Secondary School Students in Oral English

    ERIC Educational Resources Information Center

    Gambari, Amosa Isiaka; Kutigi, Amina Usman; Fagbemi, Patricia O.

    2014-01-01

    This study investigated the effectiveness of a computer-assisted pronunciation teaching (CAPT) package on the achievement of senior secondary students in oral English in Minna, Nigeria. It also examined the influence of CAPT on verbal ability and gender. The sample consisted of sixty senior secondary school students drawn from two coeducational…

  18. High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions

    DTIC Science & Technology

    2016-08-30

    Final Report: High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions. A dedicated high-performance computer cluster was... Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211.

  19. School Achievement and Performance in Chilean High Schools: The Mediating Role of Subjective Wellbeing in School-Related Evaluations

    PubMed Central

    López, Verónica; Oyanedel, Juan C.; Bilbao, Marian; Torres, Javier; Oyarzún, Denise; Morales, Macarena; Ascorra, Paula; Carrasco, Claudia

    2017-01-01

    School achievement gaps and school failure are problematic issues in Latin America, and are mainly explained by the socio-economic status (SES) of the students. What schools can do to improve school achievement and reduce school failure is a critical issue, both for school management and teacher training. In this study, we present the association of individual and school-related socio-emotional variables with school achievement and performance, controlling for the effects of SES. A probabilistic sample of 4,964 students, drawn from 191 schools enrolled in year 10 in urban areas of Chile, answered questionnaires assessing subjective wellbeing, social wellbeing in school, school climate, school social wellbeing and students’ perceptions of teachers’ wellbeing. Using structural equation modeling, and controlling for SES, we modeled subjective wellbeing as a mediator of the relationship between school-related variables, such as school climate and perception of teacher’s wellbeing, and (a) school achievement, and (b) school performance. School achievement was computed as a product of (a) the probability of passing the school year, and (b) the percentage of yearly attendance at school. Data on school achievement was drawn from administrative registries from the Chilean Ministry of Education. School performance was computed as the estimated grade point average (GPA) at the end of the school year, based on the students’ previous 5-year GPAs, and was also obtained through administrative data of the last 5 years. Findings reveal the mediating role of subjective wellbeing in the relationship between school-related evaluations (students’ social wellbeing at school, their perception of teachers’ wellbeing and school climate) and school achievement. For school achievement, two variables were mediated (students’ social wellbeing at school and school climate). However, for school performance, no significant mediations were found. We conclude that, on the one hand
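    The achievement index described above reduces to a one-line computation: the product of the probability of passing the school year and the fraction of yearly attendance. The input values below are invented illustrations, not values from the Chilean registries.

```python
# Minimal sketch of the school-achievement index: probability of passing
# the year multiplied by the fraction of yearly attendance. Inputs are
# invented illustration values.

def school_achievement(p_pass, attendance_fraction):
    if not (0 <= p_pass <= 1 and 0 <= attendance_fraction <= 1):
        raise ValueError("both inputs must lie in [0, 1]")
    return p_pass * attendance_fraction

score = school_achievement(p_pass=0.9, attendance_fraction=0.85)
```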

  1. Toward achieving flexible and high sensitivity hexagonal boron nitride neutron detectors

    NASA Astrophysics Data System (ADS)

    Maity, A.; Grenadier, S. J.; Li, J.; Lin, J. Y.; Jiang, H. X.

    2017-07-01

    Hexagonal boron nitride (h-BN) detectors have demonstrated the highest thermal neutron detection efficiency to date among solid-state neutron detectors, at about 51%. We report here the realization of h-BN neutron detectors possessing an order of magnitude enhancement in detection area while maintaining the detection efficiency of the previous achievement. These 3 mm × 3 mm detectors were fabricated from 50 μm thick, freestanding and flexible 10B-enriched h-BN (h-10BN) films, grown by metal organic chemical vapor deposition followed by mechanical separation from sapphire substrates. Mobility-lifetime results suggested that holes are the majority carriers in unintentionally doped h-BN. The detectors were tested under thermal neutron irradiation from californium-252 (252Cf) moderated by a high-density polyethylene moderator. A thermal neutron detection efficiency of ~53% was achieved at a bias voltage of 200 V. As with traditional solid-state detectors, the realization of h-BN epilayers with enhanced electrical transport properties is the key to scaling up device sizes. More specifically, the present results revealed that an electrical resistivity greater than 10^14 Ω·cm and a leakage current density below 3 × 10^-10 A/cm^2 are needed to fabricate large-area h-BN detectors, and they provide guidance for achieving high-sensitivity solid-state neutron detectors based on h-BN.
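    The quoted material targets can be cross-checked with a simple ohmic estimate (an assumption, since real leakage need not be purely ohmic): the leakage current density is J = V/(ρd) for film thickness d and bias V. With the 50 μm thickness and 200 V bias reported above, the threshold resistivity of 10^14 Ω·cm lands right at the quoted 10^-10 A/cm^2 scale.

```python
# Back-of-the-envelope ohmic check of the quoted material targets:
# leakage current density J = E / rho = V / (rho * d) for a 50 um
# (5e-3 cm) thick film biased at 200 V. The ohmic model is an assumed
# simplification for illustration.

def leakage_current_density(bias_v, resistivity_ohm_cm, thickness_cm):
    field = bias_v / thickness_cm        # V/cm
    return field / resistivity_ohm_cm    # A/cm^2

j = leakage_current_density(bias_v=200.0,
                            resistivity_ohm_cm=1e14,
                            thickness_cm=5e-3)
# j comes out at 4e-10 A/cm^2, the same scale as the quoted target,
# showing why resistivity above 1e14 ohm-cm is required.
```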

  2. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  3. Beyond Academic Reputation: Factors that Influence the College of First Choice for High Achieving Students

    ERIC Educational Resources Information Center

    Schoenherr, Holly J.

    2009-01-01

    Studies that have investigated college choice factors for high-achieving students repeatedly cite academic reputation as one of the top indicators of choice but have not indicated why some high-achieving students choose to attend universities with a less prestigious reputation than the more highly prestigious options available to them. The purpose…

  4. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative deconvolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  5. Students' Perceptions of Computer-Based Learning Environments, Their Attitude towards Business Statistics, and Their Academic Achievement: Implications from a UK University

    ERIC Educational Resources Information Center

    Nguyen, ThuyUyen H.; Charity, Ian; Robson, Andrew

    2016-01-01

    This study investigates students' perceptions of computer-based learning environments, their attitude towards business statistics, and their academic achievement in higher education. Guided by learning environments concepts and attitudinal theory, a theoretical model was proposed with two instruments, one for measuring the learning environment and…

  6. Does High School Facility Quality Affect Student Achievement? A Two-Level Hierarchical Linear Model

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Urick, Angela

    2011-01-01

    The purpose of this study is to isolate the independent effects of high school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality. Prior research on linking school facility quality to student achievement has been mixed. Studies that relate overall…

  7. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that single-processor computational speed is bounded by inherent physical limits; the only path to higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because null entries fill in during elimination, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that, in the context of direct energy minimization, the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
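    The displacement-method variant of direct energy minimization can be sketched as an unpreconditioned conjugate-gradient minimization of the potential energy Pi(u) = 1/2 u^T K u - f^T u. This is a generic illustration, not the authors' implementation; the key property is that only matrix-vector products with K are needed, so no fill-in of null entries ever occurs.

    ```python
    import numpy as np

    def strain_energy_min(K, f, tol=1e-10, max_iter=1000):
        # Conjugate-gradient minimization of Pi(u) = 1/2 u^T K u - f^T u;
        # the minimizer satisfies K u = f without factorizing K, so the
        # sparsity pattern of K is never destroyed by fill-in.
        u = np.zeros_like(f)
        r = f - K @ u          # residual = negative gradient of Pi
        p = r.copy()
        for _ in range(max_iter):
            Kp = K @ p
            alpha = (r @ r) / (p @ Kp)
            u += alpha * p
            r_new = r - alpha * Kp
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return u

    # 3-spring chain with unit stiffness, fixed at one end, unit tip load
    K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
    f = np.array([0., 0., 1.])
    u = strain_energy_min(K, f)  # converges to [1., 2., 3.]
    print(u)
    ```

    Each iteration needs only a K·p product, which parallelizes naturally across processors holding disjoint subsets of elements.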

  8. Home media and children's achievement and behavior.

    PubMed

    Hofferth, Sandra L

    2010-01-01

    This study provides a national picture of the time American 6- to 12-year-olds spent playing video games, using the computer, and watching TV at home in 1997 and 2003, and the association of early use with their achievement and behavior as adolescents. Girls benefited from computer use more than boys, and Black children benefited more than White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls, and for Black but not White boys. Increased video game play was associated with an improved ability to solve applied problems for Black girls but lower verbal achievement for all girls. For boys, increased video game play was linked to increased aggressive behavior problems. © 2010 The Author. Child Development © 2010 Society for Research in Child Development, Inc.

  9. P2P Technology for High-Performance Computing: An Overview

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Berry, Jason

    2003-01-01

    The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as much as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility of using the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and summarized my findings in a poster paper.

  10. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  11. Achieving Literacy Excellence through Identifying and Utilizing High Yield Strategies

    ERIC Educational Resources Information Center

    Hardison, Ashley

    2017-01-01

    The purpose of this study was to delve into the literacy instructional strategies of selected high-performing K-2 teachers in a Clark County, Nevada school district. The study assessed the efficacy of teachers using five core literacy components: phonemic awareness, phonics, vocabulary, fluency, and comprehension for student achievement. High…

  12. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results

    PubMed Central

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A.; Pina, Violeta; Puga, Jorge L.; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J.

    2018-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills. PMID:29375442

  13. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results.

    PubMed

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A; Pina, Violeta; Puga, Jorge L; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J

    2017-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills.

  14. Social Networks Impact Factor on Students' Achievements and Attitudes towards the "Computer in Teaching" Course at the College of Education

    ERIC Educational Resources Information Center

    Elfeky, Abdellah

    2017-01-01

    The study aims to examine the impact of social networks of a "Computer in Teaching" course on the achievement and attitudes students at the faculty of education at Najran University. The sample consists of (60) students from the third level in the special education program, (30) students represented the control group whereas the other…

  15. High-Performance Computing Unlocks Innovation at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Need to fly around a wind farm? Or step inside a molecule? NREL scientists use a super powerful (and highly energy-efficient) computer to visualize and solve big problems in renewable energy research.

  16. High resolution computational on-chip imaging of biological samples using sparsity constraint (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan

    2017-03-01

    Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.
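    A "loose sparsity constraint" of the kind described above is commonly enforced with the soft-thresholding (L1 proximal) operator applied between holographic back-propagation updates. The snippet below is a generic illustration of that operator, not the authors' reconstruction code:

    ```python
    import numpy as np

    def soft_threshold(x, lam):
        # Proximal operator of lam*||x||_1: shrinks every coefficient
        # toward zero by lam, zeroing those with magnitude below lam.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    x = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
    y = soft_threshold(x, 0.25)
    print(y)
    ```

    Interleaving this shrinkage with the physical forward/backward propagation steps suppresses reconstruction artifacts while preserving the dominant object features, which is what allows convergence from fewer measured holograms.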

  17. High-order hydrodynamic algorithms for exascale computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel Ray

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  18. High school computer science education paves the way for higher education: the Israeli case

    NASA Astrophysics Data System (ADS)

    Armoni, Michal; Gal-Ezer, Judith

    2014-07-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to computer science in high school and pursuing computing in higher education. We also examine the gender gap, in the context of high school computer science education. We show that in Israel, students who took the high-level computer science matriculation exam were more likely to pursue computing in higher education. Regarding the issue of gender, we will show that, in general, in Israel the difference between males and females who take computer science in high school is relatively small, and a larger, though still not very large difference exists only for the highest exam level. In addition, exposing females to high-level computer science in high school has more relative impact on pursuing higher education in computing.

  19. High Achieving Girls in Mathematics: What's Wrong with Working Hard?

    ERIC Educational Resources Information Center

    Howe, Ann C.; Berenson, Sarah B.

    2003-01-01

    The participation of women in graduate studies and mathematics-related careers remains a social and economic problem in the United States. Part of a larger study to understand this lack of participation, here we present preliminary findings of girls who are high achievers in middle grades mathematics. This interpretive study documents girls'…

  20. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  1. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving toward collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment, which integrates computing resources from 16 HPC centers across China. We then present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects, including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and show how SCEAPI can be used in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI; our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
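    A client for a RESTful HPC API of this kind typically wraps token-based authentication and a JSON job description. The endpoint path and field names below are illustrative assumptions, not SCEAPI's published interface:

    ```python
    import json

    class HPCRestClient:
        # Minimal sketch of a RESTful HPC client in the spirit of SCEAPI;
        # the /jobs endpoint and the body fields are hypothetical.
        def __init__(self, base_url, token):
            self.base_url = base_url.rstrip("/")
            self.headers = {"Authorization": f"Bearer {token}",
                            "Content-Type": "application/json"}

        def submit_job_request(self, app, args, nodes=1):
            # Returns (url, headers, body) for an HTTP POST; a real client
            # would send this with urllib.request or an HTTP library.
            body = json.dumps({"application": app, "arguments": args,
                               "nodes": nodes})
            return (f"{self.base_url}/jobs", self.headers, body)

    client = HPCRestClient("https://hpc.example.org/api/v1", token="abc123")
    url, headers, body = client.submit_job_request("vasp", ["INCAR"], nodes=4)
    print(url)
    ```

    Keeping the job description as plain JSON over HTTPS is what makes such an interface usable from web and mobile applications as well as from batch scripts.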

  2. Monitoring SLAC High Performance UNIX Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
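    The script-driven relational storage idea can be sketched as follows. SQLite stands in for MySQL here purely so the example is self-contained, and the single-table schema is an assumption, not the paper's actual design:

    ```python
    import sqlite3, time

    # Store Ganglia-style metric samples in a relational table instead of
    # the round-robin database, so no samples are aged out or averaged away.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE metrics (
        host TEXT, metric TEXT, value REAL, ts INTEGER)""")

    def record(host, metric, value, ts=None):
        # Parameterized insert, as a collection script would issue per sample.
        conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                     (host, metric, value, ts or int(time.time())))

    record("node01", "load_one", 0.42, ts=1)
    record("node01", "load_one", 0.57, ts=2)
    avg, = conn.execute(
        "SELECT AVG(value) FROM metrics WHERE metric='load_one'").fetchone()
    print(f"avg load: {avg:.3f}")
    ```

    Unlike an RRD, a relational store keeps every raw sample, so later plotting (e.g. with gnuplot) can query arbitrary time ranges without loss.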

  3. The Effects of the Computer-Based Instruction on the Achievement and Problem Solving Skills of the Science and Technology Students

    ERIC Educational Resources Information Center

    Serin, Oguz

    2011-01-01

    This study aims to investigate the effects of the computer-based instruction on the achievements and problem solving skills of the science and technology students. This is a study based on the pre-test/post-test control group design. The participants of the study consist of 52 students; 26 in the experimental group, 26 in the control group. The…

  4. Gender and High School Chemistry: Student Perceptions on Achievement in a Selective Setting

    ERIC Educational Resources Information Center

    Cousins, Andrew; Mills, Martin

    2015-01-01

    This paper reports on research undertaken in a middle-class Australian school. The focus of the research was on the relationship between gender and students' engagement with high school chemistry. Achievement data from many OECD [Organisation for Economic Co-operation and Development] countries suggest that middle-class girls are achieving equally…

  5. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal-trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total-internal-reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous-wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide
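    The connection between per-bit link energy and system-level GFLOPS/W can be made concrete with a back-of-envelope calculation; the one-byte-per-FLOP traffic figure is an assumption for illustration, not a number from the paper:

    ```python
    # Back-of-envelope bound linking interconnect energy to GFLOPS/W
    # (assumes 1 byte moved per FLOP and that the interconnect is the
    # only energy cost; both are illustrative simplifications).
    pj_per_bit = 1.0
    bits_per_flop = 8.0                       # 1 byte/FLOP communicated
    joules_per_flop = pj_per_bit * bits_per_flop * 1e-12
    gflops_per_watt = 1.0 / joules_per_flop / 1e9
    print(gflops_per_watt)  # -> 125.0 GFLOPS/W under these assumptions
    ```

    Even this crude bound shows why sub-pJ/bit links matter: at 1 pJ/bit the interconnect alone caps communication-heavy workloads at roughly the 100 GFLOPS/W scale, bracketing the 50 GFLOPS/W target discussed above.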

  6. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    NASA Astrophysics Data System (ADS)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

    Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest
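    The BED objective underlying these QCQP models is the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)) for n fractions of d Gy each. A minimal numeric illustration (the α/β value is a typical assumption, not taken from the study):

    ```python
    def bed(n, d, alpha_beta):
        # Biologically effective dose, linear-quadratic model:
        # BED = n * d * (1 + d / (alpha/beta))
        return n * d * (1 + d / alpha_beta)

    # Same 50 Gy physical dose delivered two ways, alpha/beta = 10 Gy (assumed):
    uniform = bed(25, 2.0, 10.0)   # 25 fractions of 2 Gy  -> BED 60.0
    hypo = bed(5, 10.0, 10.0)      # 5 fractions of 10 Gy  -> BED 100.0
    print(uniform, hypo)
    ```

    The same physical dose yields a much higher BED when hypofractionated, which is why delivering high single-fraction doses to tumor subregions while keeping the surrounding tissue near-uniformly fractionated can lower normal-tissue BED without compromising tumor control. The quadratic dependence on d is also what makes the resulting optimization problems nonconvex.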

  7. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    Readily available temporal imaging, or time-series volumetric (4D) imaging, has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion-corrected image reconstruction. Due to long computation times, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general-purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general-purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to the CPU, the GPU currently has less dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance is compared with single-threading and multithreading CPU implementations on an Intel dual-core 2.4 GHz CPU using the C programming language. CUDA provides a C-like programming interface and allows direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation times for 100 iterations in the range of 1.8-13.5 s were observed for the GPU, with image sizes ranging from 2.0 x 10^6 to 14.2 x 10^6 pixels. The GPU registration was 55-61 times faster than the CPU for the single-threading implementation, and 34-39 times faster for the multithreading implementation. For CPU-based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is
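    The demons force is an independent per-voxel computation, which is what makes the algorithm map so well onto a GPU. The NumPy sketch below shows one update step; the sign convention and normalization are assumptions of this sketch, and a full DIR loop adds warping, Gaussian regularization of the field, and many iterations:

    ```python
    import numpy as np

    def demons_step(fixed, moving, eps=1e-9):
        # One update of a Thirion-style demons force: the intensity
        # difference drives displacement along the fixed-image gradient.
        # Every voxel is independent, so this parallelizes trivially.
        diff = moving - fixed
        gy, gx = np.gradient(fixed)          # gradients along axis 0, axis 1
        denom = gx**2 + gy**2 + diff**2 + eps
        return -diff * gx / denom, -diff * gy / denom

    # Toy example: a unit square shifted by one voxel between images
    fixed = np.zeros((8, 8)); fixed[2:6, 2:6] = 1.0
    moving = np.zeros((8, 8)); moving[3:7, 3:7] = 1.0
    ux, uy = demons_step(fixed, moving)
    print(ux.shape)
    ```

    On a GPU, each CUDA thread would evaluate this expression for one voxel, which is why the speed-ups reported above scale with image size.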

  8. The High Trust Classroom: Raising Achievement from the Inside Out

    ERIC Educational Resources Information Center

    Moore, Lonnie

    2009-01-01

    This book provides a roadmap to developing a high-trust classroom, a classroom: (1) With increased student achievement; (2) With few discipline problems; (3) Where students are intrinsically motivated; and (4) Where the teacher can confidently use creative lesson planning. The author presents a simple step by step approach to earning the trust of…

  9. A Longitudinal Investigation of Project-Based Instruction and Student Achievement in High School Social Studies

    ERIC Educational Resources Information Center

    Summers, Emily J.; Dickinson, Gail

    2012-01-01

    This longitudinal study focused on how project-based instruction (PBI) influenced secondary social studies students' academic achievement and promoted College and Career Readiness (CCR). We explored and compared student achievement in a PBI high school versus a traditional instruction high school within the same rural school district. While…

  10. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    DTIC Science & Technology

    2004-07-01

    steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify the transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics: Accurate

  11. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; home directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory.

  12. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  13. Low and High Mathematics Achievement in Japanese, Chinese, and American Elementary-School Children.

    ERIC Educational Resources Information Center

    Uttal, David H.; And Others

    1988-01-01

    First and fifth grade students who scored high or low on a mathematics test were tested for intellectual ability and reading achievement. Students and their mothers were interviewed. Results indicated that factors associated with levels of achievement in mathematics operate in a similar fashion across three cultures that differ greatly in their…

  14. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high-performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors, based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system, and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that the layered software environment could flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  15. Highly-Parallel, Highly-Compact Computing Structures Implemented in Nanotechnology

    NASA Technical Reports Server (NTRS)

    Crawley, D. G.; Duff, M. J. B.; Fountain, T. J.; Moffat, C. D.; Tomlinson, C. D.

    1995-01-01

    In this paper, we describe work in which we are evaluating how the evolving properties of nano-electronic devices could best be utilized in highly parallel computing structures. Because of their combination of high performance, low power, and extreme compactness, such structures would have obvious applications in spaceborne environments, both for general mission control and for on-board data analysis. However, the anticipated properties of nano-devices mean that the optimum architecture for such systems is by no means certain. Candidates include single instruction multiple datastream (SIMD) arrays, neural networks, and multiple instruction multiple datastream (MIMD) assemblies.

  16. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    NASA Astrophysics Data System (ADS)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, a supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. The cluster therefore has 512 processors, a total of 128 GB of RAM distributed across the individual nodes, and a flash capacity of 4 TB, while benefiting from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage communications between nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows, with the goal of achieving a massively parallelized, scalable code. We present benchmarking results for computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.
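    The aggregate figures quoted above follow directly from the per-node specifications; a quick arithmetic check:

    ```python
    # Aggregate cluster capacity from the per-node specs quoted in the abstract:
    # 128 nodes, each with a quad-core CPU, 1 GB RAM, and a 32 GB microSD card.
    NODES = 128
    CORES_PER_NODE = 4
    RAM_GB_PER_NODE = 1
    SD_GB_PER_NODE = 32

    total_cores = NODES * CORES_PER_NODE             # 512 processors
    total_ram_gb = NODES * RAM_GB_PER_NODE           # 128 GB distributed RAM
    total_flash_tb = NODES * SD_GB_PER_NODE / 1024   # 4 TB of flash storage
    ```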

  17. A computational examination of directional stability for smooth and chined forebodies at high-alpha

    NASA Technical Reports Server (NTRS)

    Ravi, Ramakrishnan; Mason, William H.

    1992-01-01

    Computational Fluid Dynamics (CFD) has been used to study aircraft forebody flowfields at low-speed, angle-of-attack conditions with sideslip. The purpose is to define forebody geometries which provide good directional stability characteristics under these conditions. The flows over the experimentally investigated F-5A forebody and chine type configuration, previously computed by the authors, were recomputed with better grid topology and resolution. The results were obtained using a modified version of CFL3D (developed at NASA Langley) to solve either the Euler equations or the Reynolds equations employing the Baldwin-Lomax turbulence model with the Degani-Schiff modification to account for massive crossflow separation. Based on the results, it is concluded that current CFD methods can be used to investigate the aerodynamic characteristics of forebodies to achieve desirable high angle-of-attack characteristics. An analytically defined generic forebody model is described, and a parametric study of various forebody shapes was then conducted to determine which shapes promote a positive contribution to directional stability at high angle-of-attack. An unconventional approach for presenting the results is used to illustrate how the positive contribution arises. Based on the results of this initial parametric study, some guidelines for aerodynamic design to promote positive directional stability are presented.

  18. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Technical Reports Server (NTRS)

    Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew

    2017-01-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water, and urban areas (with NPP-VIIRS-national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
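    The fully constrained unmixing idea can be illustrated in the two-endmember case, where the nonnegativity and sum-to-one constraints reduce to clamping a single scalar. This is a toy sketch, not the authors' S-V-D implementation, and the endmember spectra below are made up:

    ```python
    # Toy fully constrained unmixing for two endmembers: fractions are
    # nonnegative and sum to one, so a single scalar f parameterizes the
    # mixture and the constrained least-squares solution is a clamped
    # projection onto the line between the two endmember spectra.

    def unmix_two(pixel, end_a, end_b):
        """Return (f_a, f_b) minimizing ||pixel - f_a*end_a - f_b*end_b||^2
        subject to f_a + f_b = 1 and f_a, f_b >= 0."""
        diff = [a - b for a, b in zip(end_a, end_b)]
        num = sum((p - b) * d for p, b, d in zip(pixel, end_b, diff))
        den = sum(d * d for d in diff)
        f = min(1.0, max(0.0, num / den))
        return f, 1.0 - f

    # Hypothetical three-band endmember spectra and an exact 50/50 mixture.
    substrate = [0.9, 0.8, 0.7]
    vegetation = [0.1, 0.5, 0.2]
    pixel = [0.5, 0.65, 0.45]

    f_s, f_v = unmix_two(pixel, substrate, vegetation)  # -> (0.5, 0.5)
    ```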

  19. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water, and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.

  20. Machine learning in computational biology to accelerate high-throughput protein expression.

    PubMed

    Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth

    2017-08-15

    The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. Supplementary data are available at Bioinformatics online.
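    Two of the three properties named above can be computed directly from a sequence. A minimal sketch using the standard Kyte-Doolittle hydropathy scale; this is illustrative only, not the authors' pipeline, and the fragment sequence is hypothetical:

    ```python
    # Kyte-Doolittle hydropathy values per amino acid (standard published scale).
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
          "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
          "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
          "Y": -1.3, "V": 4.2}

    def aromaticity(seq):
        """Fraction of aromatic residues (Phe, Trp, Tyr)."""
        return sum(seq.count(a) for a in "FWY") / len(seq)

    def gravy(seq):
        """Grand average of hydropathy (mean Kyte-Doolittle score)."""
        return sum(KD[a] for a in seq) / len(seq)

    frag = "MKTAYIAKQR"   # hypothetical protein fragment
    ```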

  1. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.

  2. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.

  3. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.

  4. The Effect of Computer Assisted and Computer Based Teaching Methods on Computer Course Success and Computer Using Attitudes of Students

    ERIC Educational Resources Information Center

    Tosun, Nilgün; Suçsuz, Nursen; Yigit, Birol

    2006-01-01

    The purpose of this research was to investigate the effects of computer-assisted and computer-based instructional methods on students' achievement in computer classes and on their attitudes towards using computers. The study, which was completed in 6 weeks, was carried out with 94 sophomores studying in the formal education program of Primary…

  5. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
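    The core kernel HPCG exercises is the conjugate-gradient iteration itself; a minimal dense-matrix sketch of that iteration follows (the real benchmark uses a sparse 3D problem with a multigrid preconditioner, so this is illustrative only):

    ```python
    # Unpreconditioned conjugate gradient for a small symmetric
    # positive-definite system A x = b, with A as a dense list of rows.
    def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
        n = len(b)
        x = [0.0] * n
        r = b[:]                       # residual r = b - A x (x starts at 0)
        p = r[:]                       # initial search direction
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
            x = [x[i] + alpha * p[i] for i in range(n)]
            r = [r[i] - alpha * Ap[i] for i in range(n)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol:           # squared residual small enough
                break
            p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
            rs_old = rs_new
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    x = conjugate_gradient(A, b)       # exact solution is [1/11, 7/11]
    ```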

  6. Using Large Data to Analyze the Effect of Learning Attitude for Cooperative Learning between the High Achievement Students and the Low Achievement Students

    ERIC Educational Resources Information Center

    Chia-Ling, Hsu; Ya-Fung, Chang

    2017-01-01

    This study investigates the effect of cooperative learning between high-achievement students and low-achievement students. Nowadays, the flipped classroom influences secondary school education all over the world, and cooperative learning has therefore become a popular teaching strategy again. However, the learning…

  7. Bringing Computational Thinking into the High School Science and Math Classroom

    NASA Astrophysics Data System (ADS)

    Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; University CT-STEM Project, Northwestern; University CenterTalent Development, Northwestern

    2013-01-01

    Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.

  8. High Speed Computing, LANs, and WAMs

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Monacos, Steve

    1994-01-01

    Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optic network architectures that offer increased capacity, more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.

  9. PuTTY | High-Performance Computing | NREL

    Science.gov Websites

    PuTTY: Learn how to use PuTTY to connect to NREL's high-performance computing (HPC) systems. When you start the PuTTY app, the program will display PuTTY's Configuration menu. When prompted, type your password.

  10. "It's a Way of Life for Us": High Mobility and High Achievement in Department of Defense Schools.

    ERIC Educational Resources Information Center

    Smrekar, Claire E.; Owens, Debra E.

    2003-01-01

    Examines the academic performance of students in U.S. Department of Defense Education Activity (DoDEA) schools, which have high student mobility. Some observers contend that these students' high achievement is a function of their middle class family and community characteristics. Asserts that DoDEA schools simultaneously "do the right…

  11. Mo' Money, Mo' Problems? High-Achieving Black High School Students' Experiences with Resources, Racial Climate, and Resilience

    ERIC Educational Resources Information Center

    Allen, Walter; Griffin, Kimberly

    2006-01-01

    A multi-site case study analyzed the college preparatory processes of nine African American high achievers attending a well-resourced, suburban high school and eight academically successful African Americans attending a low-resourced urban school. Students at both schools experienced barriers, that is, racial climate and a lack of resources, that…

  12. Achieving reliability - The evolution of redundancy in American manned spacecraft computers

    NASA Technical Reports Server (NTRS)

    Tomayko, J. E.

    1985-01-01

    The Shuttle is the first launch system deployed by NASA with full redundancy in the on-board computer systems. Fault tolerance, i.e., reverting to a backup with fewer capabilities, was the method selected for Apollo. The Gemini capsule was the first to carry a computer, which also served as backup for Titan launch vehicle guidance. Failure of the Gemini computer resulted in manual control of the spacecraft. The Apollo system served vehicle flight control and navigation functions. The redundant computer on Skylab provided attitude control only, in support of solar telescope pointing. The STS digital fly-by-wire avionics system requires 100 percent reliability. The Orbiter carries five general purpose computers, four being fully redundant and the fifth being solely an ascent-descent tool. The computers are synchronized at input and output points at a rate of about six times a second. The system is projected to cause a loss of an Orbiter only four times in a billion flights.
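    Masking a failed machine in a redundant set rests on majority voting across the channels' outputs; a hedged sketch of that idea (not the actual Shuttle flight software) is:

    ```python
    # Majority voting over redundant computer outputs: a single faulty
    # channel in a four-computer set is outvoted by the other three.
    from collections import Counter

    def vote(outputs):
        """Return the strict-majority value among channel outputs,
        or None if no strict majority exists (forcing fail-down logic)."""
        value, count = Counter(outputs).most_common(1)[0]
        return value if count > len(outputs) / 2 else None

    # Four-channel redundant set with one faulty channel.
    assert vote([42, 42, 42, 41]) == 42
    ```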

  13. Teacher Support, Instructional Practices, Student Motivation, and Mathematics Achievement in High School

    ERIC Educational Resources Information Center

    Yu, Rongrong; Singh, Kusum

    2018-01-01

    The authors examined the relationships among teacher classroom practices, student motivation, and mathematics achievement in high school. The data for this study was drawn from the base-year data of High School Longitudinal Study of 2009. Structural equation modeling method was used to estimate the relationships among variables. The results…

  14. The impact of including children with intellectual disability in general education classrooms on the academic achievement of their low-, average-, and high-achieving peers.

    PubMed

    Sermier Dessemontet, Rachel; Bless, Gérard

    2013-03-01

    This study aimed at assessing the impact of including children with intellectual disability (ID) in general education classrooms with support on the academic achievement of their low-, average-, and high-achieving peers without disability. A quasi-experimental study was conducted with an experimental group of 202 pupils from classrooms with an included child with mild or moderate ID, and a control group of 202 pupils from classrooms with no included children with special educational needs (matched pairs sample). The progress of these 2 groups in their academic achievement was compared over a period of 1 school year. No significant difference was found in the progress of the low-, average-, or high-achieving pupils from classrooms with or without inclusion. The results suggest that including children with ID in primary general education classrooms with support does not have a negative impact on the progress of pupils without disability.

  15. Instructional, Transformational, and Managerial Leadership and Student Achievement: High School Principals Make a Difference

    ERIC Educational Resources Information Center

    Valentine, Jerry W.; Prater, Mike

    2011-01-01

    This statewide study examined the relationships between principal managerial, instructional, and transformational leadership and student achievement in public high schools. Differences in student achievement were found when schools were grouped according to principal leadership factors. Principal leadership behaviors promoting instructional and…

  16. The Use of Computers in High Schools. Technical Report Number Eight.

    ERIC Educational Resources Information Center

    Crick, Joe E.; Stolurow, Lawrence M.

    This paper reports on one high school's experience with a project to teach students how to program and solve problems in mathematics using a computer. Part I is intended as a general guide for any high school administrator or mathematics instructor who is interested in exploring the installation of a computer terminal in his high school and wants…

  17. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    PubMed

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology become more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
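    The pause/roll-back capability can be sketched with per-step checkpoints of simulation state, so an analyst can rewind and try a different intervention from any earlier point. This toy deterministic SIR model is purely illustrative and is not the authors' system:

    ```python
    # Toy SIR simulation with checkpointed state and roll-back.
    import copy

    class ToySIR:
        def __init__(self, s, i, r, beta=0.3, gamma=0.1):
            self.state = {"S": s, "I": i, "R": r}
            self.beta, self.gamma = beta, gamma
            self.history = [copy.deepcopy(self.state)]   # checkpoint at t = 0

        def step(self):
            n = sum(self.state.values())
            new_inf = self.beta * self.state["S"] * self.state["I"] / n
            new_rec = self.gamma * self.state["I"]
            self.state["S"] -= new_inf
            self.state["I"] += new_inf - new_rec
            self.state["R"] += new_rec
            self.history.append(copy.deepcopy(self.state))

        def rollback(self, t):
            """Rewind to checkpoint t, discarding all later steps."""
            self.history = self.history[: t + 1]
            self.state = copy.deepcopy(self.history[t])

    sim = ToySIR(s=990, i=10, r=0)
    for _ in range(5):
        sim.step()
    sim.rollback(2)   # rewind to the state after step 2
    ```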

  18. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology

    PubMed Central

    Deodhar, Suruchi; Bisset, Keith R.; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V.

    2014-01-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology become more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity. PMID:25530914

  19. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  20. A Primer on High-Throughput Computing for Genomic Selection

    PubMed Central

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Scripting languages such as shell scripting, Perl, and R are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
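    The pipelining idea above can be sketched in a few lines: instead of evaluating traits one after another, submit them all to a worker pool so evaluations overlap. This is an illustrative sketch, not code from the paper; `evaluate_trait` is a hypothetical stand-in for a real genomic prediction model fit.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def evaluate_trait(trait, markers):
        # Toy "model": the sum of marker effects stands in for a fitted genomic prediction.
        return trait, sum(markers)

    def run_pipeline(traits, markers, workers=4):
        """Submit every trait at once so evaluations overlap instead of running sequentially."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(evaluate_trait, t, markers) for t in traits]
            return dict(f.result() for f in futures)
    ```

    On a real cluster the pool would be replaced by a batch scheduler (e.g., Condor-style job submission), but the throughput argument is the same: total wall time drops from the sum of per-trait times toward the maximum.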

  1. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    ERIC Educational Resources Information Center

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  2. A Research and Development Strategy for High Performance Computing.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  3. Striving for Excellence Sometimes Hinders High Achievers: Performance-Approach Goals Deplete Arithmetical Performance in Students with High Working Memory Capacity

    PubMed Central

    Crouzevialle, Marie; Smeding, Annique; Butera, Fabrizio

    2015-01-01

    We tested whether the goal to attain normative superiority over other students, referred to as performance-approach goals, is particularly distracting for high-Working Memory Capacity (WMC) students—that is, those who are used to being high achievers. Indeed, WMC is positively related to higher-order cognitive performance and academic success, a record of success that confers benefits on high-WMC as compared to low-WMC students. We tested whether such benefits may turn into a burden under performance-approach goal pursuit. Indeed, for high achievers, aiming to rise above others may represent an opportunity to reaffirm their positive status—a stake liable to trigger disruptive outcome concerns that interfere with task processing. Results revealed that with performance-approach goals—as compared to goals with no emphasis on social comparison—the higher the students’ WMC, the lower their performance on a complex arithmetic task (Experiment 1). Crucially, this pattern appeared to be driven by uncertainty regarding the chances of outclassing others (Experiment 2). Moreover, an accessibility measure suggested the mediational role played by status-related concerns in the observed disruption of performance. We discuss why high-stakes situations can paradoxically lead high achievers to perform sub-optimally when higher-order cognitive performance is at play. PMID:26407097

  4. The Relation of High-Achieving Adolescents' Social Perceptions and Motivation to Teachers' Nominations for Advanced Programs

    ERIC Educational Resources Information Center

    Barber, Carolyn; Torney-Purta, Judith

    2008-01-01

    The discrepancies between test-based and teacher-based criteria of high achievement are well-documented for students of all ages. This study seeks to determine whether certain high school students who score high on tests of academic achievement are more likely than others to be nominated for advanced academic programs by their teachers. Using…

  5. Achieving High Resolution Timer Events in Virtualized Environment.

    PubMed

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

    Virtual Machine Monitors (VMMs) have become popular in different application areas. Some applications may require generating timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox, and VMWare. First, we evaluate the resolutions and precisions of their timer events. The provided resolutions and precisions turn out to be far too low for some applications (e.g., networking applications with quality-of-service requirements). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events.
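    The kind of resolution/precision measurement described above can be approximated from user space by timing how far each requested sleep is overshot; inside a VM the overshoot is typically much larger than on bare metal. A minimal sketch, not the authors' benchmark; the function name and parameters are hypothetical.

    ```python
    import time

    def measure_sleep_overshoot(requested_s=0.001, trials=50):
        """Estimate timer precision: how far each sleep overshoots the requested interval."""
        overshoots = []
        for _ in range(trials):
            t0 = time.perf_counter()
            time.sleep(requested_s)  # guaranteed to sleep at least requested_s
            overshoots.append(time.perf_counter() - t0 - requested_s)
        return min(overshoots), sum(overshoots) / len(overshoots), max(overshoots)
    ```

    Comparing the min/mean/max overshoot for the same script on the host and inside each guest gives a rough picture of the timer resolution and jitter the VMM actually delivers.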

  6. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  7. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  8. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  9. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  10. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING AND NOTIFICATION § 743.2 High performance computers: Post shipment...

  11. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yier

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  12. High-Throughput Computing on High-Performance Platforms: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, D; Panitkin, S; Matteo, Turilli

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  13. Assessment of computational issues associated with analysis of high-lift systems

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.

    1992-01-01

    Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state of the art in high-lift system analysis is provided, along with identification of issues related to grid generation and flow physics which are crucial for computational success in this area. Research in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs which addresses some of the computational issues is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress in high-lift system analysis and design.

  14. Challenges to achievement of metal sustainability in our high-tech society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izatt, Reed M.; Izatt, Steven R.; Bruening, Ronald L.

    Achievement of sustainability in metal life cycles, from mining of virgin ore to consumer and industrial devices to end-of-life products, requires greatly increased recycling and improved processing of metals. Electronic and other high-tech products containing precious, toxic, and specialty metals usually have short lifetimes and low recycling rates. Products containing these metals are generally incinerated, discarded as waste in landfills, or dismantled in informal recycling using crude and environmentally irresponsible procedures. Low metal recycling rates coupled with increasing demand for products containing them necessitate increased mining, with attendant environmental, health, energy, water, and carbon-footprint consequences. In this tutorial review, challenges to achieving metal sustainability in present high-tech society are presented; health, environmental, and economic incentives for various stakeholders to improve metal sustainability are discussed; a case for technical improvements in separations technology, especially employing molecular recognition, is given; and global consequences of continuing on the present path are examined.

  15. High performance transcription factor-DNA docking with GPU computing

    PubMed Central

    2012-01-01

    Background: Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods: In an attempt to accelerate the sampling process and to improve docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve computational efficiency and scalability on GPU-based high performance computing systems. Results: The effectiveness of our approach was tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrate that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research will require efforts on two integral fronts: computational efficiency and energy function design. Conclusions: We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the
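    The Monte-Carlo-plus-simulated-annealing search named in the abstract can be illustrated on a toy one-dimensional energy function. This is a generic Metropolis/annealing sketch, not the paper's GPU docking code; all names are hypothetical, and in the real algorithm `energy` scores a protein-DNA pose rather than a scalar.

    ```python
    import math
    import random

    def anneal(energy, start, neighbor, t0=1.0, cooling=0.95, steps=500, seed=0):
        """Metropolis Monte-Carlo sampling with a geometric annealing schedule."""
        rng = random.Random(seed)
        x, e = start, energy(start)
        best_x, best_e = x, e
        t = t0
        for _ in range(steps):
            cand = neighbor(x, rng)          # propose a nearby conformation
            ce = energy(cand)
            # Accept downhill moves always; uphill moves with Boltzmann probability.
            if ce < e or rng.random() < math.exp((e - ce) / t):
                x, e = cand, ce
                if e < best_e:
                    best_x, best_e = x, e
            t *= cooling                     # cool the temperature
        return best_x, best_e
    ```

    The GPU version described in the paper parallelizes the expensive part, the energy evaluation, across many threads, which both shortens wall time and lets the sampler cover more of the conformational space for the same budget.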

  16. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    ERIC Educational Resources Information Center

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  17. Teaching Astronomy and Computation with Gaia: A New Curriculum for an Extra-curricular High School Program

    NASA Astrophysics Data System (ADS)

    Schwab, Ellianna; Faherty, Jacqueline K.; Barua, Prachurjya; Cooper, Ellie; Das, Debjani; Simone-Gonzalez, Luna; Sowah, Maxine; Valdez, Laura; BridgeUP: STEM

    2018-01-01

    BridgeUP: STEM (BridgeUP) is a program at the American Museum of Natural History (AMNH) that seeks to empower women by providing early-career scientists with research fellowships and high-school aged women with instruction in computer science and algorithmic methods. BridgeUP achieves this goal by employing post-baccalaureate women as Helen Fellows who, in addition to conducting their own scientific research, mentor and teach high school students from the New York City area. The courses, targeted at early high-school students, are designed to teach algorithmic thinking and scientific methodology through the lens of computational science. In this poster we present the new BridgeUP astronomy curriculum created for 9th and 10th grade girls. The astronomy course is designed to introduce basic concepts as well as big-data manipulation through a guided exploration of Gaia (DR1). Students learn about measuring astronomical distances through hands-on lab experiments illustrating the brightness/distance relationship, angular-size calculations of the heights of AMNH buildings, and in-depth Hertzsprung-Russell diagram activities. Throughout these labs, students increase their proficiency in collecting and analyzing data while learning to build and share code in teams. The students use their new skills to create color-color diagrams of known co-moving clusters (Oh et al. 2017) in the DR1 dataset using Python, Pandas, and Matplotlib. We discuss the successes and lessons learned in the first implementation of this curriculum and show the preliminary work of six of the students, who are continuing with computational astronomy research over the current school year.
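    The brightness/distance relationship taught in the labs follows the inverse-square law, usually expressed as the distance modulus m = M + 5 log10(d / 10 pc). A small sketch of that arithmetic (hypothetical helper names, not the curriculum's actual code):

    ```python
    import math

    def apparent_magnitude(abs_mag, distance_pc):
        """Distance modulus: m = M + 5*log10(d/10pc); brightness falls off as 1/d^2."""
        return abs_mag + 5 * math.log10(distance_pc / 10.0)

    def distance_from_magnitudes(m, M):
        """Invert the distance modulus to recover the distance in parsecs."""
        return 10.0 ** ((m - M + 5) / 5.0)
    ```

    For example, a Sun-like star (M ≈ 4.83) seen at m ≈ 9.83 is five magnitudes fainter than it would be at 10 pc, which the inversion places at 100 pc.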

  18. Does Computer Use Matter? The Influence of Computer Usage on Eighth-Grade Students' Mathematics Reasoning

    ERIC Educational Resources Information Center

    Ayieko, Rachel A.; Gokbel, Elif N.; Nelson, Bryan

    2017-01-01

    This study uses the 2011 Trends in International Mathematics and Science Study to investigate the relationships among students' and teachers' computer use, and eighth-grade students' mathematical reasoning in three high-achieving nations: Finland, Chinese Taipei, and Singapore. The study found a significant negative relationship in all three…

  19. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that deriving the maximum expected throughput through exhaustive search is computationally infeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression for the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
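    To make the role of channel ordering concrete, here is a deliberately simplified availability model, which is an illustrative assumption and not the letter's exact TVWS formulation: channels are sensed in a fixed order and transmission uses the first idle one, so the expected throughput depends on the order. In this toy model an adjacent-swap exchange argument shows that sorting channels by rate in descending order maximizes the expectation, which exhaustive search confirms for small instances; all names are hypothetical.

    ```python
    from itertools import permutations

    def expected_throughput(order, p, r):
        """E[T] when channels are sensed in 'order' and transmission uses the first idle one.
        p[ch] = probability channel ch is idle, r[ch] = its rate when used."""
        busy_prefix, total = 1.0, 0.0
        for ch in order:
            total += busy_prefix * p[ch] * r[ch]   # reach ch only if all earlier were busy
            busy_prefix *= 1.0 - p[ch]
        return total

    def best_order(p, r):
        """Polynomial-time ordering: rate-descending is optimal in this toy model."""
        return sorted(range(len(p)), key=lambda ch: -r[ch])

    def best_by_search(p, r):
        """Factorial-time exhaustive search, for checking small instances only."""
        return max(permutations(range(len(p))), key=lambda o: expected_throughput(o, p, r))
    ```

    The contrast between `best_order` (a single sort) and `best_by_search` (n! evaluations) mirrors the letter's point that a polynomial-time rule can replace an infeasible exhaustive search.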

  20. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  1. Evaluative and Behavioral Correlates to Intrarehearsal Achievement in High School Bands

    ERIC Educational Resources Information Center

    Montemayor, Mark

    2014-01-01

    The purpose of this study was to investigate relationships of teaching effectiveness, ensemble performance quality, and selected rehearsal procedures to various measures of intrarehearsal achievement (i.e., musical improvement exhibited by an ensemble during the course of a single rehearsal). Twenty-nine high school bands were observed in two…

  2. Impact of Physical Environment on Academic Achievement of High School Youth.

    ERIC Educational Resources Information Center

    Burkhalter, Bettye B.

    1983-01-01

    To study the relationship of the physical environment to high school students' academic achievement, 60 students participated in an experiential career exploration program at the Alabama Space and Rocket Center while 108 students participated in a traditional careers program. Tests indicated the former group improved more in career choice…

  3. High-Performance Computing: High-Speed Computer Networks in the United States, Europe, and Japan. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request from the Senate Committee on Commerce, Science, and Transportation, and from the House Committee on Science, Space, and Technology, for information on efforts to develop high-speed computer networks in the United States, Europe (limited to France, Germany, Italy, the Netherlands, and the United…

  4. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  5. Academic achievement and career choice in science: Perceptions of African American urban high school students

    NASA Astrophysics Data System (ADS)

    Jones, Sheila Kay

    2007-12-01

    Lower test scores in science and fewer career choices in science among African American high school students than among their White counterparts have resulted in lower interest during high school and an underrepresentation of African Americans in science and engineering fields. The reasons for this underachievement are not known. This qualitative study used a grounded theory methodology to examine what influence parental involvement, ethnic identity, and early mentoring had on the academic achievement in science and career choice in science of African American urban high school 10th-grade students. Using semi-structured, open-ended questions in individual interviews and focus groups, twenty participants responded to questions about African American urban high school student achievement in science and their career choice in science. The median age of participants was 15 years; 85% had passed either high school biology or physical science. The findings of the study revealed influences and interactions of selected factors on African American urban high school achievement in science. Sensing potential emerged as the overarching theme, with six subthemes: A Taste of Knowledge, Sounds I Hear, Aromatic Barriers, What Others See, The Touch of Others, and The Sixth Sense. These themes correspond to the natural senses of the human body. Participants described a disconnect between what science is, their own individual learning and success, and what their participation in science could mean for them and for the future of the larger society. Insight was gained into appropriate intervention strategies to improve African American urban high school achievement in science.

  6. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic-buffet-associated structural oscillations. Both aircraft may experience a dip in flutter speed in the transonic regime. Accurate aeroelastic computations for these complex fluid/structure interactions require high-fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures. Computations using these high-fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limitations in both memory and speed; as a result, parallel computers have evolved to overcome them. This paper addresses the transition taking place in computational aeroelasticity from conventional computers to parallel computers, including the special techniques needed to take advantage of the architecture of new parallel computers. Results are illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.

  7. Alliance for Computational Science Collaboration: HBCU Partnership at Alabama A&M University Continuing High Performance Computing Research and Education at AAMU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Xiaoqing; Deng, Z. T.

    2009-11-10

    This is the final report for Department of Energy (DOE) project DE-FG02-06ER25746, entitled "Continuing High Performance Computing Research and Education at AAMU". This three-year project started on August 15, 2006, and ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer internships and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for a hypersonic shockwave control experiment and in computational-science-related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Lab (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1,500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high-performance Linux workstations; and (5) conducted image analysis for an electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU had a great impact on minority students.
    As noted by the Accreditation Board for Engineering and Technology (ABET) in 2009, "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students

  8. The effects of guided inquiry instruction on student achievement in high school biology

    NASA Astrophysics Data System (ADS)

    Vass, Laszlo

    The purpose of this quantitative, quasi-experimental study was to measure the effect of a student-centered instructional method called guided inquiry on the achievement of students in a unit of study in high school biology. The study used a non-random sample of 109 students: the control group of 55 students, enrolled at high school one, received teacher-centered instruction, while the experimental group of 54 students, enrolled at high school two, received student-centered, guided inquiry instruction. The pretest-posttest design of the study analyzed scores using an independent t-test, a dependent t-test (p < .001), an ANCOVA (p = .007), a mixed-method ANOVA (p = .024), and hierarchical linear regression (p < .001). The experimental group that received guided inquiry instruction had statistically significantly higher achievement than the control group.

  9. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method named parallel continuous simulated tempering (PCST) is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. Together, these methods achieve significant refinement of low-quality models without any knowledge of the solution. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, a refinement test on two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round PCST-SBM refinement protocol improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.

  10. Achieving High Resolution Timer Events in Virtualized Environment

    PubMed Central

    Adamczyk, Blazej; Chydzinski, Andrzej

    2015-01-01

Virtual Machine Monitors (VMMs) have become popular in many application areas. Some applications require timer events generated with high resolution and precision, which can be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. First, we evaluate the resolution and precision of their timer events; both turn out to be far too low for some applications (e.g., networking applications with quality-of-service requirements). Then, using Xen virtualization, we demonstrate an improved timer design that greatly enhances both the resolution and precision of timer events. PMID:26177366
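The kind of measurement the paper performs can be sketched with a user-space periodic timer: request events at a fixed period and report the mean achieved period (resolution) and its standard deviation (jitter, a proxy for precision). This sketch uses plain `time.sleep`, not the paper's VMM-level instrumentation:

```python
import time
import statistics

def measure_timer(period_s, n_events=200):
    """Fire a periodic event every period_s seconds using sleep-until-
    deadline, then return (mean achieved period, jitter). Inside a
    virtual machine the jitter is typically much larger than on
    bare metal."""
    deadline = time.perf_counter()
    timestamps = []
    for _ in range(n_events):
        deadline += period_s
        delay = deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        timestamps.append(time.perf_counter())
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(deltas), statistics.stdev(deltas)

mean_period, jitter = measure_timer(0.001)
print(f"mean period {mean_period * 1e6:.0f} us, jitter {jitter * 1e6:.0f} us")
```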

  11. A First Attempt to Bring Computational Biology into Advanced High School Biology Classrooms

    PubMed Central

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S.

    2011-01-01

Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element, which teaches genetic evolution, into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. The curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his or her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach it alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors. PMID:22046118

  13. The Effect of Technology Integration on High School Students' Literacy Achievement

    ERIC Educational Resources Information Center

    Robinson, Kara

    2016-01-01

    This literature review presents a critical appraisal of current research on the role technology integration plays in high school students' literacy achievement. It identifies the gaps within the research through comprehensive analysis. The review develops an argument that the use of laptops in secondary English classrooms has a significant impact…

  14. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

Zadeh, Lotfi A.

    1994-01-01

The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence.
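The fuzzy-logic component described above can be illustrated with a minimal membership-function sketch in the spirit of the car-parking example; the fuzzy sets, distances, and speed commands below are invented for illustration and do not come from the article:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function: rises from a to a peak
    at b, falls back to zero at c. Returns a degree in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for "distance to obstacle" in metres:
near = lambda d: triangular(d, -1.0, 0.0, 2.0)
far  = lambda d: triangular(d, 1.0, 5.0, 9.0)

# Weighted (centroid-style) blend of two crisp speed commands,
# the core of a simple fuzzy controller:
d = 1.5
slow_speed, fast_speed = 0.5, 3.0
w_near, w_far = near(d), far(d)
speed = (w_near * slow_speed + w_far * fast_speed) / (w_near + w_far)
print(round(speed, 3))  # 1.333
```

A distance of 1.5 m is partly "near" (0.25) and partly "far" (0.125), so the controller outputs a speed between the two crisp commands rather than switching abruptly.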

  15. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) for scientific computing applications. We report our experience with software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
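The performance-per-watt metric the paper evaluates is simply throughput divided by average power draw; the numbers below are hypothetical, chosen only to show how a low-power SoC can win on this metric while losing on raw throughput:

```python
def perf_per_watt(events_per_s, avg_watts):
    """Throughput achieved per watt drawn. Higher is better when
    comparing architectures for cost-efficient computing."""
    return events_per_s / avg_watts

# Hypothetical measurements (not the paper's numbers):
soc  = perf_per_watt(events_per_s=120.0, avg_watts=40.0)   # 3.0
xeon = perf_per_watt(events_per_s=450.0, avg_watts=300.0)  # 1.5
print(soc > xeon)  # True: the SoC is more power-efficient
```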

  16. Generating high temperature tolerant transgenic plants: Achievements and challenges.

    PubMed

    Grover, Anil; Mittal, Dheeraj; Negi, Manisha; Lavania, Dhruv

    2013-05-01

    Production of plants tolerant to high temperature stress is of immense significance in the light of global warming and climate change. Plant cells respond to high temperature stress by re-programming their genetic machinery for survival and reproduction. High temperature tolerance in transgenic plants has largely been achieved either by over-expressing heat shock protein genes or by altering levels of heat shock factors that regulate expression of heat shock and non-heat shock genes. Apart from heat shock factors, over-expression of other trans-acting factors like DREB2A, bZIP28 and WRKY proteins has proven useful in imparting high temperature tolerance. Besides these, elevating the genetic levels of proteins involved in osmotic adjustment, reactive oxygen species removal, saturation of membrane-associated lipids, photosynthetic reactions, production of polyamines and protein biosynthesis process have yielded positive results in equipping transgenic plants with high temperature tolerance. Cyclic nucleotide gated calcium channel proteins that regulate calcium influxes across the cell membrane have recently been shown to be the key players in induction of high temperature tolerance. The involvement of calmodulins and kinases in activation of heat shock factors has been implicated as an important event in governing high temperature tolerance. Unfilled gaps limiting the production of high temperature tolerant transgenic plants for field level cultivation are discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Junior High Computer Studies: Teacher Resource Manual.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Curriculum Branch.

    This manual is designed to help classroom teachers in Alberta, Canada implement the Junior High Computer Studies Program. The first eight sections cover the following material: (1) introduction to the teacher resource manual; (2) program rationale and philosophy; (3) general learner expectations; (4) program framework and flexibility; (5) program…

  18. Moving to higher ground: Closing the high school science achievement gap

    NASA Astrophysics Data System (ADS)

    Mebane, Joyce Graham

    The purpose of this study was to examine the perceptions of West High School constituents (students, parents, teachers, administrators, and guidance counselors) about the readiness and interest of African American students at West High School to take Advanced Placement (AP) and International Baccalaureate (IB) science courses as a strategy for closing the achievement gap. This case study utilized individual interviews and questionnaires for data collection. The participants were selected biology students and their parents, teachers, administrators, and guidance counselors at West High School. The results of the study indicated that just over half the students and teachers, most parents, and all guidance counselors thought African American students were prepared to take AP science courses. Only one of the three administrators thought the students were prepared to take AP science courses. Between one-half and two-thirds of the students, parents, teachers, and administrators thought students were interested in taking an AP science course. Only two of the guidance counselors thought there was interest among the African American students in taking AP science courses. The general consensus among the constituents about the readiness and interest of African American students at West High School to take IB science courses was that it is too early in the process to really make definitive statements. West is a prospective IB school and the program is new and not yet in place. Educators at the West High School community must find reasons to expect each student to succeed. Lower expectations often translate into lower academic demands and less rigor in courses. Lower academic demands and less rigor in courses translate into less than adequate performance by students. When teachers and administrators maintain high expectations, they encourage students to aim high rather than slide by with mediocre effort (Lumsden, 1997). 
As a result of the study, the following suggestions should

  19. Computer Analysis Of High-Speed Roller Bearings

    NASA Technical Reports Server (NTRS)

    Coe, H.

    1988-01-01

High-speed cylindrical roller-bearing analysis program (CYBEAN) developed to compute behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With program, accurate assessment of geometry-induced roller preload possible for variety of outer-ring and housing configurations and loading conditions. Enables detailed examination of bearing performance and permits exploration of causes and consequences of bearing skew. Provides general capability for assessment of designs of bearings supporting main shafts of engines. Written in FORTRAN IV.

  20. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  1. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    ERIC Educational Resources Information Center

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  2. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  3. Development of a Very Dense Liquid Cooled Compute Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Phillip N.; Lipp, Robert J.

    2013-12-10

    The objective of this project was to design and develop a prototype very energy efficient high density compute platform with 100% pumped refrigerant liquid cooling using commodity components and high volume manufacturing techniques. Testing at SLAC has indicated that we achieved a DCIE of 0.93 against our original goal of 0.85. This number includes both cooling and power supply and was achieved employing some of the highest wattage processors available.
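The DCIE figure reported above (Data Center infrastructure Efficiency, the reciprocal of PUE) is the fraction of total facility power that reaches the IT equipment; the load values below are hypothetical, chosen only to reproduce the 0.93 ratio:

```python
def dcie(it_power_kw, total_facility_power_kw):
    """Data Center infrastructure Efficiency: IT power divided by
    total facility power (IT + cooling + distribution losses).
    A value of 1.0 would mean zero overhead."""
    return it_power_kw / total_facility_power_kw

# Illustrative load: 93 kW of IT gear in a 100 kW facility.
print(dcie(93.0, 100.0))  # 0.93
```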

  4. Computer-aided design of high-frequency transistor amplifiers.

    NASA Technical Reports Server (NTRS)

    Hsieh, C.-C.; Chan, S.-P.

    1972-01-01

    A systematic step-by-step computer-aided procedure for designing high-frequency transistor amplifiers is described. The technique makes it possible to determine the optimum source impedance which gives a minimum noise figure.
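The optimum-source-impedance idea rests on the standard four-parameter noise model, in which the noise factor is minimized exactly when the source admittance equals Yopt. A minimal sketch, with illustrative parameter values that are not from the article:

```python
def noise_factor(ys, y_opt, f_min, r_n):
    """Two-port noise factor as a function of complex source
    admittance Ys (siemens), using the standard model
        F = Fmin + (Rn / Gs) * |Ys - Yopt|**2
    where Gs is the real part of Ys, Rn the equivalent noise
    resistance, and Fmin the minimum achievable noise factor."""
    g_s = ys.real
    return f_min + (r_n / g_s) * abs(ys - y_opt) ** 2

# Illustrative transistor noise parameters:
y_opt = 0.02 + 0.005j   # optimum source admittance (S)
f_min, r_n = 1.5, 50.0  # minimum noise factor; noise resistance (ohm)

# At Ys = Yopt the noise factor collapses to Fmin...
print(noise_factor(y_opt, y_opt, f_min, r_n))  # 1.5
# ...and any mismatch raises it above Fmin:
print(noise_factor(0.03 + 0.005j, y_opt, f_min, r_n) > f_min)  # True
```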

  5. Consequences of the Confucian Culture: High Achievement but Negative Psychological Attributes?

    ERIC Educational Resources Information Center

    Ho, Irene T.; Hau, Kit-Tai

    2010-01-01

    In "Unforgiving Confucian culture: A breeding ground for high academic achievement, test anxiety and self-doubt?" Stankov (in press) provides three reasons for caution against over-glorifying the academic excellence of Confucian Asian learners, namely that it may lead to a reluctance to change their rote learning approach which is not conducive to…

  6. High-Stakes Testing and Student Achievement: Updated Analyses with NAEP Data

    ERIC Educational Resources Information Center

    Nichols, Sharon L.; Glass, Gene V.; Berliner, David C.

    2012-01-01

    The present research is a follow-up study of earlier published analyses that looked at the relationship between high-stakes testing pressure and student achievement in 25 states. Using the previously derived Accountability Pressure Index (APR) as a measure of state-level policy pressure for performance on standardized tests, a series of…

  7. Growing into Equity: Professional Learning and Personalization in High-Achieving Schools

    ERIC Educational Resources Information Center

    Gleason, Sonia Caus; Gerzon, Nancy

    2013-01-01

    What makes a Title I school high-achieving, and what can we all learn from that experience? Professional learning and leadership that supports personalized instruction makes the difference, as captured in the ground-breaking research of authors Sonia Caus Gleason and Nancy Gerzon. This illuminating book shows how four outstanding schools are…

  8. High School Success: An Effective Intervention for Achievement and Dropout Prevention

    ERIC Educational Resources Information Center

    Lowder, Christopher Michael

    2012-01-01

    The purpose of this mixed-design study was to use quantitative and qualitative research to explore the effects of High School Success (a course for at-risk ninth graders) and its effectiveness on student achievement, attendance, and dropout prevention. The research questions address whether there is a significant difference between at-risk ninth…

  9. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Wang

    2007-07-17

File storage systems are playing an increasingly important role in high-performance computing as the performance gap between CPU and disk increases. It could take a long time to develop an entire system from scratch, so solutions will have to be built as extensions to existing systems. If new portable, customized software components are plugged into these systems, better sustained high I/O performance and higher scalability will be achieved, and the development cycle of the next generation of parallel file systems will be shortened. The overall research objective of this ECPI development plan is to develop a lightweight, customized, high-performance I/O management package named LightI/O to extend and leverage current parallel file systems used by DOE. During this period, we developed a novel LightI/O component, prototyped it in PVFS2, and evaluated the resulting extended PVFS2 system on data-intensive applications. The preliminary results indicate the extended PVFS2 delivers better performance and reliability to users. A strong collaborative effort between the PI at the University of Nebraska-Lincoln and the DOE collaborators, Drs. Rob Ross and Rajeev Thakur at Argonne National Laboratory, who lead the PVFS2 group, makes the project more promising.

  10. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    ERIC Educational Resources Information Center

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  11. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing, with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1 Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10 Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1 Hz deterministic simulation results with 10 Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new PSHA hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. 
Large-scale SCEC/CME high performance codes

  12. Working Memory Load and Reminder Effect on Event-Based Prospective Memory of High- and Low-Achieving Students in Math.

    PubMed

    Chen, Youzhen; Lian, Rong; Yang, Lixian; Liu, Jianrong; Meng, Yingfang

    The effects of working memory (WM) demand and reminders on an event-based prospective memory (PM) task were compared between students with low and high achievement in math. WM load (1- and 2-back tasks) was manipulated as a within-subject factor and reminder (with or without reminder) as a between-subject factor. Results showed that high-achieving students outperformed low-achieving students on all PM and n-back tasks. Use of a reminder improved PM performance and thus reduced prospective interference; the performance of ongoing tasks also improved for all students. Both PM and n-back performances in low WM load were better than in high WM load. High WM load had more influence on low-achieving students than on high-achieving students. Results suggest that low-achieving students in math were weak at PM and influenced more by high WM load. Thus, it is important to train these students to set up an obvious reminder for their PM and improve their WM.
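The n-back task used to manipulate WM load above has a simple scoring structure: a trial is a target when the current stimulus matches the one presented n trials earlier. A minimal sketch (the stimulus sequence and scoring rule are illustrative, not the study's materials):

```python
def nback_targets(stimuli, n):
    """Indices in an n-back sequence where the current stimulus
    matches the one shown n trials earlier (the trials a
    participant should respond to)."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def hit_rate(responses, stimuli, n):
    """Fraction of true n-back targets the participant responded to
    (a simplified score that ignores false alarms)."""
    targets = set(nback_targets(stimuli, n))
    if not targets:
        return 1.0
    return len(targets & set(responses)) / len(targets)

seq = list("ABABCAAC")
print(nback_targets(seq, 2))   # [2, 3]
print(hit_rate([2], seq, 2))   # 0.5
```

Raising n from 1 to 2, as in the study's 1-back vs. 2-back manipulation, increases the number of items that must be held and updated in working memory.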

  13. Competence with Fractions Predicts Gains in Mathematics Achievement

    PubMed Central

    Bailey, Drew H.; Hoard, Mary K.; Nugent, Lara; Geary, David C.

    2012-01-01

    Competence with fractions predicts later mathematics achievement, but the co-developmental pattern between fractions knowledge and mathematics achievement is not well understood. We assessed this co-development through examination of the cross-lagged relation between a measure of conceptual knowledge of fractions and mathematics achievement in sixth and seventh grade (n = 212). The cross-lagged effects indicated that performance on the sixth grade fractions concepts measure predicted one year gains in mathematics achievement (β = .14, p<.01), controlling for the central executive component of working memory and intelligence, but sixth grade mathematics achievement did not predict gains on the fractions concepts measure (β = .03, p>.50). In a follow-up assessment, we demonstrated that measures of fluency with computational fractions significantly predicted seventh grade mathematics achievement above and beyond the influence of fluency in computational whole number arithmetic, performance on number fluency and number line tasks, and central executive span and intelligence. Results provide empirical support for the hypothesis that competence with fractions underlies, in part, subsequent gains in mathematics achievement. PMID:22832199

  15. Appropriate Use Policy | High-Performance Computing | NREL

    Science.gov Websites

Policy fragments for users of the National Renewable Energy Laboratory (NREL) High Performance Computing (HPC) resources, covering user affiliation (government agency, National Laboratory, university, or private entity), intellectual property terms, and multifactor authentication (a physical or virtual token used with a one-time password).

  16. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  17. Analysis of the Effects of the Computer Enhanced Classroom on the Achievement of Remedial High School Math Students.

    ERIC Educational Resources Information Center

    Lang, William Steve; And Others

    The effects of the use of computer-enhanced instruction with remedial students were assessed, using 4,293 ninth through twelfth graders--3,308 Black, 957 White, and 28 Other--involved in the Governor's Remediation Initiative (GRI) in Georgia. Data sources included the Comprehensive Tests of Basic Skills (CTBS), a data collection form developed for…

  18. A survey of CPU-GPU heterogeneous computing techniques

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-07-04

As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have unique features and strengths; hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, that enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.
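The workload-partitioning technique the survey highlights can be sketched in its simplest static form: split the work between CPU and GPU in proportion to their measured throughputs so both devices finish at roughly the same time. The rates below are hypothetical:

```python
def partition_work(n_items, cpu_rate, gpu_rate):
    """Static CPU-GPU workload partitioning: assign items in
    proportion to each device's throughput (items/second) so the
    two finish together. Returns (cpu_items, gpu_items)."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    gpu_items = round(n_items * gpu_share)
    return n_items - gpu_items, gpu_items

# Hypothetical rates: the GPU processes items 3x faster than the CPU.
cpu_items, gpu_items = partition_work(1000, cpu_rate=1.0, gpu_rate=3.0)
print(cpu_items, gpu_items)  # 250 750
```

Dynamic schemes refine this by re-measuring rates at runtime and re-balancing as the computation proceeds.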

  20. Mindmapping: Its effects on student achievement in high school biology

    NASA Astrophysics Data System (ADS)

    Cunningham, Glennis Edge

    The primary goal of schools is to promote the highest degree of learning possible. Yet teachers spend the majority of their time engaged in lecturing while students spend the majority of their time passively present (Cawelti, 1997; Grinder, 1991; Jackson & Davis, 2000; Jenkins, 1996). Helping students develop proficiency in learning, which translates into using that expertise to construct knowledge in subject domains, is a crucial goal of education. Students need exposure to teaching and learning practices that prepare them for both the classroom and their places in the future workforce (Ettinger, 1998; Longley, Goodchild, Maguire, & Rhind, 2001; NRC, 1996; Texley & Wild, 1996). The purpose of this study was to determine if achievement in high school science courses could be enhanced utilizing mindmapping. The subjects were primarily 9th and 10th graders (n = 147) at a suburban South Texas high school. A pretest-posttest control group design was selected to determine the effects of mindmapping on student achievement as measured by a teacher-developed, panel-validated instrument. Follow-up interviews were conducted with the teacher and a purposive sample of students (n = 7) to determine their perceptions of mindmapping and its effects on teaching and learning. Mindmapping is a strategy for visually displaying large amounts of conceptual, hierarchical information in a concise, organized, and accessible format. Mindmaps arrange information similar to that found on the traditional topic outline into colorful spatial displays that offer the user a view of the "forest" as well as the "trees" (Hyerle, 1996; Wandersee, 1990b). An independent samples t-test and a one-way analysis of covariance (ANCOVA) determined no significant difference in achievement between the groups. The experimental group improved in achievement at least as much as the control group. Several factors may have played a role in the lack of statistically significant results. These factors include the

  1. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, with the metric being FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
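The FLOPS metric the dissertation questions is computed by counting a kernel's floating-point operations and dividing by wall time. A minimal sketch using a naive matrix-vector product (illustrative only, not HOBBIES code):

```python
import time

def measure_flops(n):
    """Estimate sustained FLOPS from a naive n x n matrix-vector
    product, which performs ~2*n*n floating-point operations (one
    multiply plus one add per matrix entry)."""
    A = [[1.0] * n for _ in range(n)]
    x = [1.0] * n
    t0 = time.perf_counter()
    y = [sum(a * b for a, b in zip(row, x)) for row in A]
    elapsed = time.perf_counter() - t0
    return 2.0 * n * n / elapsed, y

flops, y = measure_flops(200)
# The figure reflects arithmetic only; RAM, disk, and network behaviour,
# which the dissertation argues must also be benchmarked, are invisible to it.
```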

  2. Effects of Full-Time and Part-Time High-Ability Programs on Developments in Students' Achievement Emotions

    ERIC Educational Resources Information Center

    Hornstra, Lisette; van der Veen, Ineke; Peetsma, Thea

    2017-01-01

    This study focused on effects of high-ability programs on students' achievement emotions, i.e. emotions that students experience that are associated with achievement activities. Participants were students in grade 4-6 of primary education: 218 students attended full-time high-ability programs, 245 attended part-time high-ability programs (i.e.…

  3. The Impact of High-Speed Internet Connectivity at Home on Eighth-Grade Student Achievement

    ERIC Educational Resources Information Center

    Kingston, Kent J.

    2013-01-01

    In the fall of 2008 Westside Community Schools - District 66, in Omaha, Nebraska implemented a one-to-one notebook computer take home model for all eighth-grade students. The purpose of this study was to determine the effect of a required yearlong one-to-one notebook computer program supported by high-speed Internet connectivity at school on (a)…

  4. The Impact of Reading Success Academy on High School Reading Achievement

    ERIC Educational Resources Information Center

    Burlison, Kelly; Chave, Josh

    2014-01-01

    The study explores the effectiveness of the Reading Success Academy on the reading achievement of the selected group of ninth-grade students in a comprehensive high school. We examine in what ways the Reading Success Academy may improve the reading proficiency rates and amount of reading growth of ninth-grade students. The results indicate that…

  5. An Examination of High-Achieving First-Generation College Students from Low-Income Backgrounds

    ERIC Educational Resources Information Center

    Hébert, Thomas P.

    2018-01-01

    Experiences of 10 high-achieving first-generation college students from low-income backgrounds were the focus of this qualitative research study. Family adversity and difficult personal experiences during adolescence were major themes; however, students benefitted from emotionally supportive K-12 educators and academic rigor in high school.…

  6. The Relationship between Principals' Instructional Focus and Academic Achievement of High Poverty Students

    ERIC Educational Resources Information Center

    Aste, Mahri

    2009-01-01

    The purpose of the study was to determine the relationship between teacher perceptions of the frequency and effectiveness of principal instructional leadership behaviors and student achievement in high-poverty elementary schools. In order to accomplish the purpose, survey methodology was employed. Teachers from six high-poverty elementary schools…

  7. Scan Directed Load Balancing for Highly-Parallel Mesh-Connected Computers

    DTIC Science & Technology

    1991-07-01

    AD-A242 045. Scan Directed Load Balancing for Highly-Parallel Mesh-Connected Computers. Edoardo S. Biagioni and Jan F. Prins, Department of Computer Science, University of North Carolina, Chapel Hill, N.C. 27599-3175 USA (biagioni@cs.unc.edu, prins@cs.unc.edu). Abstract: Scan Directed... MasPar Computer Corporation. Bibliography: [1] Edoardo S. Biagioni. Scan Directed Load Balancing. PhD thesis, University of North Carolina, Chapel Hill.
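The scan-directed idea, redistributing items by way of a prefix scan of per-processor loads, can be sketched as follows. This is an illustrative reconstruction from the title, not the report's exact algorithm:

```python
def scan_balance(loads):
    """An exclusive prefix scan of per-processor item counts gives
    every item a global rank; dividing the ranks evenly over the P
    processors then tells each item where to migrate."""
    P, total = len(loads), sum(loads)
    # exclusive prefix scan of the counts
    scan, running = [], 0
    for c in loads:
        scan.append(running)
        running += c
    # destination processor for each item, by global rank
    dest = [[(scan[p] + i) * P // total for i in range(loads[p])]
            for p in range(P)]
    new_loads = [sum(d.count(q) for d in dest) for q in range(P)]
    return new_loads

# A skewed distribution becomes even:
# scan_balance([8, 0, 0, 0]) -> [2, 2, 2, 2]
```

On a mesh-connected machine the scan itself parallelizes well, which is what makes this style of load balancing attractive there.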

  8. High-resolution computer-aided moire

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high-resolution computer-assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problems associated with recovering the displacement field from the sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
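The Fourier-transform extraction of phase (and hence displacement) from a moire grid can be sketched in one dimension with a single-bin DFT. This is an illustrative reduction of the idea; the paper's procedure is two-dimensional:

```python
import math

def grid_phase(samples, carrier_cycles):
    """Phase of a sampled grid pattern, from the single DFT bin at
    the carrier frequency. In moire work the local displacement u
    follows from the phase change: u = phase * pitch / (2 * pi)."""
    N = len(samples)
    re = sum(s * math.cos(2 * math.pi * carrier_cycles * n / N)
             for n, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * carrier_cycles * n / N)
              for n, s in enumerate(samples))
    return math.atan2(im, re)

# A synthetic grid with 8 cycles and a 0.5 rad phase shift is recovered.
N, k, phi = 256, 8, 0.5
intensity = [1.0 + math.cos(2 * math.pi * k * n / N + phi) for n in range(N)]
recovered = grid_phase(intensity, k)
```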

  9. Data Security Policy | High-Performance Computing | NREL

    Science.gov Websites

    to use its high-performance computing (HPC) systems. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems are categorized as low risk; they may contain only non-sensitive data. One example of sensitive data would be personally identifiable information (PII).

  10. High Performance Computing Operations Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kimberly C.

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  11. The development of science achievement in middle and high school. Individual differences and school effects.

    PubMed

    Ma, Xin; Wilkins, Jesse L M

    2002-08-01

    Using data from the Longitudinal Study of American Youth (LSAY), hierarchical linear models (HLMs) were used to model the growth of student science achievement in three areas (biology, physical science, and environmental science) during middle and high school. Results showed significant growth in science achievement across all areas. The growth was quadratic across all areas, with rapid growth at the beginning grades of middle school but slow growth at the ending grades of high school. At the student level, socioeconomic status (SES) and age were related to the rate of growth in all areas. There were no gender differences in the rate of growth in any of the three areas. At the school level, variables associated with school context (school mean SES and school size) and variables associated with school climate (principal leadership, academic expectation, and teacher autonomy) were related to the growth in science achievement. Initial (Grade 7) status in science achievement was not associated with the rate of growth in science achievement among either students or schools in any of the three areas.

  12. The High-Potential Fast-Flying Achiever: Themes from the English Language Literature 1976-1995.

    ERIC Educational Resources Information Center

    Altman, Yochanan

    1997-01-01

    Review of business management literature from the United States, United Kingdom, and Canada identified the following: the images of high flyer, fast track, and high achiever; the meaning of success; emphasis on performance; corporate rites of passage; and opportunities for women to be high flyers. (SK)

  13. Analysis of Scientific Attitude, Computer Anxiety, Educational Internet Use, Problematic Internet Use, and Academic Achievement of Middle School Students According to Demographic Variables

    ERIC Educational Resources Information Center

    Bekmezci, Mehmet; Celik, Ismail; Sahin, Ismail; Kiray, Ahmet; Akturk, Ahmet Oguz

    2015-01-01

    In this research, students' scientific attitude, computer anxiety, educational use of the Internet, academic achievement, and problematic use of the Internet are analyzed based on different variables (gender, parents' educational level and daily access to the Internet). The research group involves 361 students from two middle schools which are…

  14. The Effect of the Time Management Art on Academic Achievement among High School Students in Jordan

    ERIC Educational Resources Information Center

    Al-Zoubi, Maysoon

    2016-01-01

    This study aimed at recognizing the effect of the Time Management Art on academic achievement among high school students in the Hashemite Kingdom of Jordan. The researcher employed the descriptive-analytic approach to achieve the purpose of the study, choosing a sample of 2,000 high school female and male students as respondents to the…

  15. The Role of Teachers at University: What Do High Achiever Students Look for?

    ERIC Educational Resources Information Center

    Monteiro, Silvia; Almeida, Leandro S.; Vasconcelos, Rosa M.

    2012-01-01

    The perceptions of students about their teachers have interested the academic and scientific community, regarding the improvement of the quality of higher education. This paper presents data obtained from interviews conducted with ten high achiever engineering students and focuses on the characteristics of teachers that are highly valued by the…

  16. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  17. Incremental Theory of Intelligence Moderated the Relationship between Prior Achievement and School Engagement in Chinese High School Students

    PubMed Central

    Li, Ping; Zhou, Nan; Zhang, Yuchi; Xiong, Qing; Nie, Ruihong; Fang, Xiaoyi

    2017-01-01

    School engagement plays a prominent role in promoting academic accomplishment. In contrast to the relative wealth of research that examined the impact of students’ school engagement on their academic achievement, considerably less research has investigated the effect of high school students’ prior achievement on their school engagement. The present study examined the relationship between prior achievement and school engagement among Chinese high school students. Based on Dweck’s social-cognitive theory of motivation, we further examined the moderating effect of students’ theories of intelligence (TOIs) on this relationship. A total of 4036 (2066 girls) grade-10 students from five public high schools reported their high school entrance exam achievement in Chinese, Math and English, their school engagement, and their TOIs. Results showed that (a) students’ prior achievement predicted their behavioral, emotional, and cognitive engagement, respectively, and (b) the association between prior achievement and engagement was strong for students with an incremental theory but, for emotional and cognitive engagement, not for those with an entity theory. These findings suggest that prior achievement and an incremental theory were implicated in relation to adolescents’ school engagement. Implications and future research directions are discussed. PMID:29021772

  18. The effects of chronic achievement motivation and achievement primes on the activation of achievement and fun goals.

    PubMed

    Hart, William; Albarracín, Dolores

    2009-12-01

    This research examined the hypothesis that situational achievement cues can elicit achievement or fun goals depending on chronic differences in achievement motivation. In 4 studies, chronic differences in achievement motivation were measured, and achievement-denoting words were used to influence behavior. The effects of these variables were assessed on self-report inventories, task performance, task resumption following an interruption, and the pursuit of means relevant to achieving or having fun. Findings indicated that achievement priming (vs. control priming) activated a goal to achieve and inhibited a goal to have fun in individuals with chronically high-achievement motivation but activated a goal to have fun and inhibited a goal to achieve in individuals with chronically low-achievement motivation.

  19. The Effects of Chronic Achievement Motivation and Achievement Primes on the Activation of Achievement and Fun Goals

    PubMed Central

    Hart, William; Albarracín, Dolores

    2013-01-01

    This research examined the hypothesis that situational achievement cues can elicit achievement or fun goals depending on chronic differences in achievement motivation. In 4 studies, chronic differences in achievement motivation were measured, and achievement-denoting words were used to influence behavior. The effects of these variables were assessed on self-report inventories, task performance, task resumption following an interruption, and the pursuit of means relevant to achieving or having fun. Findings indicated that achievement priming (vs. control priming) activated a goal to achieve and inhibited a goal to have fun in individuals with chronically high-achievement motivation but activated a goal to have fun and inhibited a goal to achieve in individuals with chronically low-achievement motivation. PMID:19968423

  20. Achieving high coverage in Rwanda's national human papillomavirus vaccination programme.

    PubMed

    Binagwaho, Agnes; Wagner, Claire M; Gatera, Maurice; Karema, Corine; Nutt, Cameron T; Ngabo, Fidele

    2012-08-01

    Virtually all women who have cervical cancer are infected with the human papillomavirus (HPV). Of the 275,000 women who die from cervical cancer every year, 88% live in developing countries. Two vaccines against the HPV have been approved. However, vaccine implementation in low-income countries tends to lag behind implementation in high-income countries by 15 to 20 years. In 2011, Rwanda's Ministry of Health partnered with Merck to offer the Gardasil HPV vaccine to all girls of appropriate age. The Ministry formed a "public-private community partnership" to ensure effective and equitable delivery. Thanks to a strong national focus on health systems strengthening, more than 90% of all Rwandan infants aged 12-23 months receive all basic immunizations recommended by the World Health Organization. In 2011, Rwanda's HPV vaccination programme achieved 93.23% coverage after the first three-dose course of vaccination among girls in grade six. This was made possible through school-based vaccination and community involvement in identifying girls absent from or not enrolled in school. A nationwide sensitization campaign preceded delivery of the first dose. Through a series of innovative partnerships, Rwanda reduced the historical two-decade gap in vaccine introduction between high- and low-income countries to just five years. High coverage rates were achieved due to a delivery strategy that built on Rwanda's strong vaccination system and human resources framework. Following the GAVI Alliance's decision to begin financing HPV vaccination, Rwanda's example should motivate other countries to explore universal HPV vaccine coverage, although implementation must be tailored to the local context.

  1. The Effects of Computer Assisted English Instruction on High School Preparatory Students' Attitudes towards Computers and English

    ERIC Educational Resources Information Center

    Ates, Alev; Altunay, Ugur; Altun, Eralp

    2006-01-01

    The aim of this research was to discern the effects of computer assisted English instruction on English language preparatory students' attitudes towards computers and English in a Turkish-medium high school with an intensive English program. A quasi-experimental time series research design, also called "before-after" or "repeated…

  2. Computation Directorate Annual Report 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L; McGraw, J R; Ashby, S F

    Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively.
The safety of our

  3. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  4. Strategies to achieve high-solids enzymatic hydrolysis of dilute-acid pretreated corn stover.

    PubMed

    Geng, Wenhui; Jin, Yongcan; Jameel, Hasan; Park, Sunkyu

    2015-01-01

    Three strategies were presented to achieve high solids loading while maximizing carbohydrate conversion: fed-batch, splitting/thickening, and clarifier processes. Enzymatic hydrolysis was performed at water-insoluble solids (WIS) of 15% using washed dilute-acid pretreated corn stover. The carbohydrate concentration increased from 31.8 to 99.3 g/L when the insoluble solids content increased from 5% to 15% WIS, while the final carbohydrate conversion decreased from 78.4% to 73.2%. For the fed-batch process, a carbohydrate conversion efficiency of 76.8% was achieved when the solid was split into a 60:20:20 ratio, with all enzymes added first. For the splitting/thickening process, a carbohydrate conversion of 76.5% was realized when the filtrate was recycled to simulate a steady-state process. Lastly, the clarifier process was evaluated, and the highest carbohydrate conversion, 81.4%, was achieved. All of these results suggest the possibility of enzymatic hydrolysis at high solids to make the overall conversion cost-competitive. Copyright © 2015 Elsevier Ltd. All rights reserved.
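The benefit of the fed-batch 60:20:20 split comes down to a simple mass balance on the insoluble solids. A sketch with illustrative quantities (not the paper's exact recipe, and ignoring solubilization between feeds):

```python
def fed_batch_wis(total_solids, water, split):
    """Water-insoluble solids (WIS) fraction after each fed-batch
    addition, ignoring solubilization between feeds."""
    added, profile = 0.0, []
    for frac in split:
        added += frac * total_solids
        profile.append(added / (added + water))
    return profile

# 15 g of solids fed 60:20:20 into 85 g of water: the WIS seen by the
# enzymes rises stepwise, reaching 15% only at the final feed.
steps = fed_batch_wis(15.0, 85.0, [0.6, 0.2, 0.2])
```

Keeping the instantaneous WIS below the final loading is what lets the fed-batch process approach the conversion of a low-solids hydrolysis.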

  5. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dyson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high-performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error that affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
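The Hermitian divided differences underlying the MESA implementation can be illustrated with classic Hermite interpolation, where doubled nodes carry derivative data through a divided-difference table. This is a sketch of the machinery only, not the MESA schemes themselves:

```python
def hermite_coeffs(xs, fs, dfs):
    """Newton-form coefficients for Hermite interpolation from values
    fs and first derivatives dfs at nodes xs, via a divided-difference
    table over doubled nodes."""
    z = [x for x in xs for _ in range(2)]          # each node appears twice
    n = len(z)
    table = [[0.0] * n for _ in range(n)]
    for i in range(n):
        table[i][0] = fs[i // 2]
    for i in range(1, n):
        if z[i] == z[i - 1]:
            table[i][1] = dfs[i // 2]              # repeated node: use f'
        else:
            table[i][1] = (table[i][0] - table[i - 1][0]) / (z[i] - z[i - 1])
    for j in range(2, n):
        for i in range(j, n):
            table[i][j] = (table[i][j - 1] - table[i - 1][j - 1]) / (z[i] - z[i - j])
    return z, [table[i][i] for i in range(n)]

def hermite_eval(z, coeffs, x):
    """Evaluate the Newton form by nested multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - z[k]) + coeffs[k]
    return result

# Value/derivative data for f(x) = x**3 at x = 0 and 1 reproduces it exactly.
z, c = hermite_coeffs([0.0, 1.0], [0.0, 1.0], [0.0, 3.0])
```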

  6. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
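Why strong scalability saturates at a few thousand cores can be illustrated with an Amdahl's-law estimate. The numbers below are illustrative, not the study's benchmark data:

```python
def strong_scaling_efficiency(t1, serial_frac, cores):
    """Amdahl's-law estimate of strong-scaling speedup and parallel
    efficiency for runtime T(p) = t1 * (s + (1 - s) / p)."""
    tp = t1 * (serial_frac + (1.0 - serial_frac) / cores)
    speedup = t1 / tp
    return speedup, speedup / cores

# Even a 0.1% serial fraction caps the achievable speedup near 1000,
# no matter how many cores the platform offers.
s512, e512 = strong_scaling_efficiency(100.0, 0.001, 512)
s65536, e65536 = strong_scaling_efficiency(100.0, 0.001, 65536)
```

Communication and IO costs, which grow with core count, make the real saturation point even lower than this idealized bound.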

  7. Raising the stakes: How students' motivation for mathematics associates with high- and low-stakes test achievement.

    PubMed

    Simzar, Rahila M; Martinez, Marcela; Rutherford, Teomara; Domina, Thurston; Conley, AnneMarie M

    2015-04-01

    This study uses data from an urban school district to examine the relation between students' motivational beliefs about mathematics and high- versus low-stakes math test performance. We use ordinary least squares and quantile regression analyses and find that the association between students' motivation and test performance differs based on the stakes of the exam. Students' math self-efficacy and performance avoidance goal orientation were the strongest predictors for both exams; however, students' math self-efficacy was more strongly related to achievement on the low-stakes exam. Students' motivational beliefs had a stronger association at the low-stakes exam proficiency cutoff than they did at the high-stakes passing cutoff. Lastly, the negative association between performance avoidance goals and high-stakes performance showed a decreasing trend across the achievement distribution, suggesting that performance avoidance goals are more detrimental for lower achieving students. These findings help parse out the ways motivation influences achievement under different stakes.
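The quantile regression used in the study alongside ordinary least squares fits conditional quantiles by minimizing the asymmetric "pinball" loss. A minimal sketch of that loss (illustrative of the machinery, not the study's model):

```python
def pinball_loss(y_true, y_pred, tau):
    """Average quantile ('pinball') loss at quantile tau:
    under-predictions are weighted tau, over-predictions (1 - tau)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        diff = y - p
        total += tau * diff if diff >= 0 else (tau - 1.0) * diff
    return total / len(y_true)

# At tau = 0.9 the same two candidate predictors score very differently,
# which is what lets quantile regression trace associations across the
# whole achievement distribution rather than only at the mean.
low_fit = pinball_loss([1.0, 3.0], [1.0, 1.0], 0.9)
high_fit = pinball_loss([1.0, 3.0], [3.0, 3.0], 0.9)
```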

  8. An Examination of Home, School, and Community Experiences of High-Achieving Deaf Adults

    ERIC Educational Resources Information Center

    Tanner, Kara Kunst

    2017-01-01

    This qualitative study investigated the academic, community, and family experiences of adults who are profoundly deaf. The deaf adults were categorized as high-achieving by having attended college post-high school. The intent of this study is to give teachers, parents, and other deaf students, insight into the factors responsible for contributing…

  9. The Relationship between Self-Efficacy and Achievement in At-Risk High School Students

    ERIC Educational Resources Information Center

    Gold, Jarrett Graham

    2010-01-01

    The focus of this quantitative survey study was the examination of the relationship between self-efficacy and academic achievement in 164 at-risk high school students. The study used Bandura's self-efficacy as the theoretical framework. The research questions involved understanding the levels of self-efficacy in at-risk high school students and…

  10. The Relationship between Thinking Style Differences and Career Choices for High-Achieving Students

    ERIC Educational Resources Information Center

    Kim, Mihyeon

    2011-01-01

    The intent of this study was to present information about high-achieving students' career decision making associated with thinking styles. We gathered data from two International Baccalaureate (IB) programs and a Governor's School Program with a sample of 209 high-school students. The findings of this study demonstrated that the effect of program…

  11. An Emerging Professional Identity: Influences on the Achievement of High-Ability First-Generation College Females

    ERIC Educational Resources Information Center

    Speirs Neumeister, Kristie L.; Rinker, Julie

    2006-01-01

    Using a qualitative interview design, this study examined factors contributing to the academic achievement of gifted first-generation college females. Findings indicated an emerging professional identity as the primary influence on achievement. The participants' high ability served as a passport to accessing coursework, extracurricular…

  12. Cohort versus Non-Cohort High School Students' Math Performance: Achievement Test Scores and Coursework

    ERIC Educational Resources Information Center

    Parke, Carol S.; Keener, Dana

    2011-01-01

    The purpose of this study is to compare multiple measures of mathematics achievement for 1,378 cohort students who attended the same high school in a district from 9th to 12th grade with non-cohort students in each grade level. Results show that mobility had an impact on math achievement. After accounting for gender, ethnicity, and SES, adjusted…

  13. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization.
The previously developed Parallel Diagonal Dominant (PDD) algorithm
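The partition-and-exchange pattern the abstract describes can be illustrated with a toy 1D Jacobi solver for the Poisson equation. This is a minimal sketch, not the project's actual code: the grid is split into subdomains, and each "processor" updates only its own block, reading neighbor values at the block edges where a real distributed solver would perform halo exchange. All names and sizes are illustrative.

```python
import numpy as np

def jacobi_poisson_1d(f, n_iter=8000, n_parts=4):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 by Jacobi iteration.
    The grid is partitioned into n_parts subdomains; reads of u at
    subdomain edges stand in for halo exchange between neighbors."""
    n = len(f)
    h = 1.0 / (n + 1)
    u = np.zeros(n + 2)  # padded with the boundary values u[0] = u[-1] = 0
    bounds = np.linspace(0, n, n_parts + 1).astype(int)
    for _ in range(n_iter):
        u_new = u.copy()
        for p in range(n_parts):  # each part is one "processor's" subdomain
            lo, hi = bounds[p] + 1, bounds[p + 1] + 1
            u_new[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1]
                                  + h * h * f[lo - 1:hi - 1])
        u = u_new
    return u[1:-1]

# manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x)
n = 63
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi ** 2 * np.sin(np.pi * x)
u = jacobi_poisson_1d(f)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Jacobi is used here only because it makes the subdomain structure obvious; its slow convergence is exactly why the project pursued faster solvers such as PDD and compact schemes.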

  14. Heterogeneity in High Math Achievement across Schools: Evidence from the American Mathematics Competitions. NBER Working Paper No. 18277

    ERIC Educational Resources Information Center

    Ellison, Glenn; Swanson, Ashley

    2012-01-01

    This paper explores differences in the frequency with which students from different schools reach high levels of math achievement. Data from the American Mathematics Competitions is used to produce counts of high-scoring students from more than two thousand public, coeducational, non-magnet, non-charter U.S. high schools. High-achieving students…

  15. User Account Passwords | High-Performance Computing | NREL

    Science.gov Websites

    User Account Passwords: For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. Logging in for the first time: after you request an HPC user account, you'll receive a temporary password. Set…

  16. High-performance computing with quantum processing units

    DOE PAGES

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...

    2017-03-01

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
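The tight-versus-loose trade-off can be sketched with a back-of-envelope runtime model for a hybrid algorithm that invokes the QPU repeatedly, paying a fixed offload latency per invocation. The numbers below are illustrative assumptions, not figures from the paper:

```python
def hybrid_runtime(n_calls, qpu_time_s, classical_time_s, offload_latency_s):
    """Total runtime of a hybrid classical/quantum algorithm that pays a
    fixed offload latency on every QPU invocation."""
    return classical_time_s + n_calls * (qpu_time_s + offload_latency_s)

# illustrative only: 10^4 variational iterations, 1 ms per circuit execution
tight = hybrid_runtime(10_000, 1e-3, 5.0, 1e-6)   # QPU on the node fabric
loose = hybrid_runtime(10_000, 1e-3, 5.0, 5e-2)   # QPU behind a remote service
```

Even this crude model shows why the interconnect dominates: with a 50 ms round trip, offload latency swamps both the quantum and the classical work.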

  17. High-performance computing with quantum processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  18. Using Computer-Assisted Instruction to Enhance Achievement of English Language Learners

    ERIC Educational Resources Information Center

    Keengwe, Jared; Hussein, Farhan

    2014-01-01

    Computer-assisted instruction (CAI) in English-language environments offers practice time, motivates students, enhances student learning, increases the authentic materials students can study, and has the potential to encourage teamwork among students. The findings from this particular study suggested that students who used computer-assisted…

  19. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation, in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time, and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
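The coded model can be illustrated with a far simpler code than the paper's generalized Reed–Solomon construction: replicate the input, let each of several unreliable "processors" compute the function on its copy (each failing independently with constant probability), and decode the output by majority vote. This is a toy sketch of the idea, not the paper's scheme:

```python
import random

def coded_compute(f, x, n_copies=31, fail_prob=0.2, rng=None):
    """Compute f(x) in a toy coded model: a repetition code over n_copies
    unreliable processors, decoded by majority vote."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    outputs = []
    for _ in range(n_copies):
        y = f(x)
        if rng.random() < fail_prob:   # faulty processor: corrupt the output
            y = y + rng.randint(1, 10)
        outputs.append(y)
    # majority-vote decoding: the most frequent output wins
    return max(set(outputs), key=outputs.count)

result = coded_compute(lambda v: v * v, 12)
```

With 31 copies and a 20% per-processor failure rate, the correct value wins the vote except with exponentially small probability — the same flavor of guarantee the paper obtains, at constant-factor rather than logarithmic overhead only because repetition codes scale far worse than Reed–Solomon.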

  20. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high-performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
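A toolset of this kind might assemble a virtual cluster in two steps: build an EC2 `run_instances` request, then render an MPI hostfile once the instances report their addresses. The sketch below only constructs the request and hostfile in memory, making no API call; the AMI ID, instance type, and key name are placeholders, and the real SCC toolset's interface is not documented here:

```python
def virtual_cluster_request(n_nodes, ami_id, instance_type="c5.xlarge",
                            key_name="scc-key"):
    """Build the parameter dict an SCC-style toolset might pass to EC2's
    run_instances to boot a virtual compute cluster (no API call is made)."""
    return {
        "ImageId": ami_id,          # AMI bundling the materials-science codes
        "MinCount": n_nodes,
        "MaxCount": n_nodes,
        "InstanceType": instance_type,
        "KeyName": key_name,
    }

def mpi_hostfile(private_ips, slots_per_node=4):
    """Render an MPI hostfile listing each node and its process slots."""
    return "\n".join(f"{ip} slots={slots_per_node}" for ip in private_ips)

params = virtual_cluster_request(8, "ami-0123456789abcdef0")  # placeholder AMI
hosts = mpi_hostfile(["10.0.0.11", "10.0.0.12"])
```

Separating "describe the cluster" from "launch it" mirrors how such toolsets keep the cloud-provider call site small and testable.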