Sample records for accelerated strategic computing

  1. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klitsner, Tom

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  3. Delivering Insight: The History of the Accelerated Strategic Computing Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  4. Corporate Training Delivery: Dollars and Sense. Unconventional Wisdom.

    ERIC Educational Resources Information Center

    Workforce Economics, 2001

    2001-01-01

    With accelerating technology in the workplace, worker training has become a key component of almost every corporation's long-range strategic plan. Almost all companies provide some form of training in computer operations to new and existing employees, and more than 90 percent of companies also provided a range of management, leadership, and…

  5. Applications of the Strategic Defense Initiative's compact accelerators

    NASA Technical Reports Server (NTRS)

    Montanarelli, Nick; Lynch, Ted

    1991-01-01

    The Strategic Defense Initiative's (SDI) investment in particle accelerator technology for its directed energy weapons program has produced breakthroughs in the size and power of new accelerators. These accelerators, in turn, have produced spinoffs in several areas: the radio frequency quadrupole linear accelerator (RFQ linac) was recently incorporated into the design of a cancer therapy unit at the Loma Linda University Medical Center, an SDI-sponsored compact induction linear accelerator may replace Cobalt-60 radiation and hazardous ethylene-oxide as a method for sterilizing medical products, and other SDIO-funded accelerators may be used to produce the radioactive isotopes oxygen-15, nitrogen-13, carbon-11, and fluorine-18 for positron emission tomography (PET). Other applications of these accelerators include bomb detection, non-destructive inspection, decomposing toxic substances in contaminated ground water, and eliminating nuclear waste.

  6. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high-fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI-developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  7. Rocky Mountain Research Station: 2008 Strategic Framework Update

    Treesearch

    Lane Eskew

    2009-01-01

    The Rocky Mountain Research Station's 2008 Strategic Framework Update is an addendum to the 2003 RMRS Strategic Framework. It focuses on critical natural resources research topics over the next five to 10 years when we will see continued, if not accelerated, socioeconomic and...

  8. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
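
    For readers who want the two eigenvalue problems spelled out, the standard operator forms are sketched below in textbook notation; this is a paraphrase of standard transport theory, not notation taken from the paper itself.

      % k-eigenvalue (criticality) and alpha-eigenvalue (time) forms of the neutron
      % transport equation, in operator notation (assumed here; MC++ may use a
      % different but equivalent formulation):
      \[
        L\,\psi \;=\; \frac{1}{k}\,F\,\psi
        \qquad\text{($k$-eigenvalue problem)}
      \]
      \[
        \Bigl(L + \frac{\alpha}{v}\Bigr)\,\psi \;=\; F\,\psi
        \qquad\text{($\alpha$-eigenvalue problem)}
      \]
      % Here psi is the angular neutron flux, v the neutron speed, L collects the
      % streaming, collision, and in-scattering terms, and F is the fission
      % production operator.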

  9. Advanced Computer Aids in the Planning and Execution of Air Warfare and Ground Strike Operations: Conference Proceedings, Meeting of the Avionics Panels of AGARD (51st) Held in Kongsberg, Norway on 12-16 May 1986

    DTIC Science & Technology

    1986-02-01

    the area of Artificial Intelligence (AI). DARPA's Strategic Computing Program is developing an AI technology base upon which several applications...technologies with the Strategic Computing Program. In late 1983 the Strategic Computing Program (SCP) was announced. The program was organized to develop...solving a resource allocation problem. The remainder of this paper will discuss the TEMPLAR program as it relates to the Strategic Computing Program

  10. Accelerated construction

    DOT National Transportation Integrated Search

    2004-01-01

    Accelerated Construction Technology Transfer (ACTT) is a strategic process that uses various innovative techniques, strategies, and technologies to minimize actual construction time, while enhancing quality and safety on today's large, complex multip...

  11. Operationalizing Anticipatory Governance

    DTIC Science & Technology

    2011-09-01

    ward to known events. They provide a means to test in the mind, or in a virtual setting, what we might otherwise have to try in reality. Other...process that can be used to correct our strategic myopia and secure America's global place in the 21st century. Acceleration and Complexity: Our era is...

  12. Information Architecture for Quality Management Support in Hospitals.

    PubMed

    Rocha, Álvaro; Freixo, Jorge

    2015-10-01

    Quality Management occupies a strategic role in organizations, and the adoption of computer tools within an aligned information architecture facilitates the challenge of making more with less, promoting the development of a competitive edge and sustainability. A formal Information Architecture (IA) lends organizations an enhanced knowledge but, above all, favours management. This simplifies the reinvention of processes, the reformulation of procedures, bridging and the cooperation amongst the multiple actors of an organization. In the present investigation work, we planned the IA for the Quality Management System (QMS) of a Hospital, which allowed us to develop and implement the QUALITUS (QUALITUS, name of the computer application developed to support Quality Management in a Hospital Unit) computer application. This solution translated into significant gains for the Hospital Unit under study, accelerating the quality management process and reducing tasks, the number of documents, the amount of information to be filled in, and information errors, amongst others.

  13. Linking Student Engagement and Strategic Initiatives: Using NSSE Results to Inform Campus Action

    ERIC Educational Resources Information Center

    Doherty, Kathryn

    2007-01-01

    Towson University (TU) is in a period of growth in both students and facilities. To guide this growth, TU relies on its strategic plan, Towson 2010, to focus its strategic decisions through 2010. Release of the National Survey of Student Engagement (NSSE) data for 2005 coincided with a call for academic excellence and accelerated growth at Towson…

  14. Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau

    NASA Astrophysics Data System (ADS)

    Simpson, R. L., Jr.

    1987-06-01

    The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects: one for the Army, which is just getting started, called the AirLand Battle Management program (ALBM), which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established program, for the Navy, the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision-aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding and knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology.

  15. Costa - Introduction to 2015 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, James E.

    Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.

  16. Military research needs in biomedical informatics.

    PubMed

    Reifman, Jaques; Gilbert, Gary R; Fagan, Lawrence; Satava, Richard

    2002-01-01

    The 2001 U.S. Army Medical Research and Materiel Command (USAMRMC) Biomedical Informatics Roadmap Meeting was devoted to developing a strategic plan in four focus areas: Hospital and Clinical Informatics, E-Health, Combat Health Informatics, and Bioinformatics and Biomedical Computation. The driving force of this Roadmap Meeting was the recent accelerated pace of change in biomedical informatics in which emerging technologies have the potential to affect significantly the Army research portfolio and investment strategy in these focus areas. The meeting was structured so that the first two days were devoted to presentations from experts in the field, including representatives from the three services, other government agencies, academia, and the private sector, and the morning of the last day was devoted to capturing specific biomedical informatics research needs in the four focus areas. This white paper summarizes the key findings and recommendations and should be a powerful tool for the crafting of future requests for proposals to help align USAMRMC new strategic research investments with new developments and emerging technologies.

  17. Accelerating Clean Energy Commercialization. A Strategic Partnership Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Richard; Pless, Jacquelyn; Arent, Douglas J.

    Technology development in the clean energy and broader clean tech space has proven to be challenging. Long-standing methods for advancing clean energy technologies from science to commercialization are best known for relatively slow, linear progression through research and development, demonstration, and deployment (RDD&D), and are characterized by well-known valleys of death for financing. Investment returns expected by traditional venture capital investors have been difficult to achieve, particularly for hardware-centric innovations and companies that are subject to project finance risks. Commercialization support from incubators and accelerators has helped address these challenges by offering more support services to start-ups; however, more effort is needed to fulfill the desired clean energy future. The emergence of new strategic investors and partners in recent years has opened up innovative opportunities for clean tech entrepreneurs, and novel commercialization models are emerging that involve new alliances among clean energy companies, RDD&D, support systems, and strategic customers. For instance, Wells Fargo and Company (WFC) and the National Renewable Energy Laboratory (NREL) have launched a new technology incubator that supports faster commercialization through a focus on technology development. The incubator combines strategic financing, technology and technical assistance, strategic customer site validation, and ongoing financial support.

  18. Product-market differentiation: a strategic planning model for community hospitals.

    PubMed

    Milch, R A

    1980-01-01

    Community hospitals would seem to have every reason to identify and capitalize on their product-market strengths. The strategic marketing/planning model provides a framework for rational analysis of the community hospital dilemma and for developing sensible solutions to the complex problems of accelerating hospital price-inflation.

  19. Strategic Flexibility in Computational Estimation for Chinese- and Canadian-Educated Adults

    ERIC Educational Resources Information Center

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-01-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with…

  20. Article by LTG Qi Jianguo on International Security Affairs

    DTIC Science & Technology

    2013-04-01

    jiaodian; 战略争夺焦点) was Europe. After the Cold War, the Middle East became the focus of strategic competition. At present, the focus of global...said to be unprecedented.” The important strategic thought (zhongda zhanlue sixiang; 重大战略思想) of a “great changing situation” (dabianju; 大变局) is a...is the strategic concept (zhanlue gouxiang; 战略构想) of accelerating the promotion of the process of multi-polarity. The great changing situation

  1. A Conceptual Model of a Research Design about Congruence between Environmental Turbulence, Strategic Aggressiveness, and General Management Capability in Community Colleges

    ERIC Educational Resources Information Center

    Lewis, Alfred

    2013-01-01

    Numerous studies have examined the determinant strategic elements that affect the performance of organizations. These studies have increasing relevance to community colleges because of the accelerating pace of change in enrollment, resource availability, leadership turnover, and demand for service that these institutions are experiencing. The…

  2. Strategic Control Algorithm Development : Volume 3. Strategic Algorithm Report.

    DOT National Transportation Integrated Search

    1974-08-01

    The strategic algorithm report presents a detailed description of the functional basic strategic control arrival algorithm. This description is independent of a particular computer or language. Contained in this discussion are the geometrical and env...

  3. Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense

    DTIC Science & Technology

    1983-10-28

    Computing. By seizing an opportunity to leverage recent advances in artificial intelligence, computer science, and microelectronics, the Agency plans...occurred in many separated areas of artificial intelligence, computer science, and microelectronics. Advances in "expert system" technology now...and expert knowledge. Advances in Artificial Intelligence: mechanization of speech recognition, vision, and natural language understanding.

  4. Collaborative Strategic Board Games as a Site for Distributed Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative--that is, a game requires that players work in joint pursuit of a shared goal--the computational thinking is easily observed as distributed across…

  5. Computer Simulation in Mass Emergency and Disaster Response: An Evaluation of Its Effectiveness as a Tool for Demonstrating Strategic Competency in Emergency Department Medical Responders

    ERIC Educational Resources Information Center

    O'Reilly, Daniel J.

    2011-01-01

    This study examined the capability of computer simulation as a tool for assessing the strategic competency of emergency department nurses as they responded to authentically computer simulated biohazard-exposed patient case studies. Thirty registered nurses from a large, urban hospital completed a series of computer-simulated case studies of…

  6. Cloud computing strategic framework (FY13 - FY15).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arellano, Lawrence R.; Arroyo, Steven C.; Giese, Gerald J.

    This document presents an architectural framework (plan) and roadmap for the implementation of a robust Cloud Computing capability at Sandia National Laboratories. It is intended to be a living document and serve as the basis for detailed implementation plans, project proposals and strategic investment requests.

  7. Molecular dynamics simulations through GPU video games technologies

    PubMed Central

    Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia

    2016-01-01

    Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the most de novo techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. This has become a state strategic technique and now plays a key role in many areas of exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations could require enormous amounts of computer memory and time and therefore their execution has been a big problem. Despite the huge computational cost, molecular dynamics have been implemented using traditional computers with a central processing unit (CPU). A graphics processing unit (GPU) computing technology was first designed with the goal to improve video games, by rapidly creating and displaying images in a frame buffer such as screens. The hybrid GPU-CPU implementation, combined with parallel computing is a novel technology to perform a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations including MD simulations. Herein, we describe the new methodologies developed initially as video games and how they are now applied in MD simulations. PMID:27525251
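
    As a concrete illustration of the inner loop that GPU MD codes accelerate, the toy sketch below (my own, not code from the paper; the Lennard-Jones potential and all names are assumptions) advances a handful of particles by one velocity-Verlet time step.

      # Toy molecular dynamics step: pairwise Lennard-Jones forces plus one
      # velocity-Verlet update. Real MD engines run this kind of loop billions of
      # times, which is why GPU acceleration pays off.
      import numpy as np

      def lj_forces(pos, eps=1.0, sigma=1.0):
          """Pairwise Lennard-Jones forces for a small set of particles."""
          forces = np.zeros_like(pos)
          n = len(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  r = pos[i] - pos[j]
                  d2 = np.dot(r, r)
                  s6 = (sigma ** 2 / d2) ** 3
                  f = 24.0 * eps * (2.0 * s6 ** 2 - s6) / d2 * r
                  forces[i] += f
                  forces[j] -= f
          return forces

      def velocity_verlet(pos, vel, mass, dt):
          """Advance positions and velocities by one time step."""
          f_old = lj_forces(pos)
          pos = pos + vel * dt + 0.5 * (f_old / mass) * dt ** 2
          f_new = lj_forces(pos)
          vel = vel + 0.5 * (f_old + f_new) / mass * dt
          return pos, vel

      pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
      vel = np.zeros_like(pos)
      pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.001)
      print(pos)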

  8. Strategic Control Algorithm Development : Volume 4A. Computer Program Report.

    DOT National Transportation Integrated Search

    1974-08-01

    A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...

  9. Strategic Control Algorithm Development : Volume 4B. Computer Program Report (Concluded)

    DOT National Transportation Integrated Search

    1974-08-01

    A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...

  10. National Strategic Computing Initiative Strategic Plan

    DTIC Science & Technology

    2016-07-01

    A.6 National Nanotechnology Initiative...Initiative: https://www.nitrd.gov/nitrdgroups/index.php?title=Big_Data_(BD_SSG); National Nanotechnology Initiative: http://www.nano.gov; Precision...computing. While not limited to neuromorphic technologies, the National Nanotechnology Initiative's first Grand Challenge seeks to achieve brain

  11. Start with the End in Mind: Experiences of Accelerated Course Completion by Pre-Service Teachers and Educators

    ERIC Educational Resources Information Center

    Collins, Anita; Hay, Iain; Heiner, Irmgard

    2013-01-01

    In response to changes government funding and policies over the past five years, the Australian tertiary sector has entered an increasingly competitive climate. This has forced many universities to become more strategic in attracting increased numbers of PSTs. Providing accelerated learning opportunities for PSTs is viewed as one way to gain…

  12. Establishing a nursing strategic agenda: the whys and wherefores.

    PubMed

    Young, Claire

    2008-01-01

    The health system nursing leader is responsible for providing high-quality, service-oriented nursing care and delivering that care with disciplined cost management; for leading and developing a group of nursing executives and managers at the facility level; for establishing nursing professional development programs; for building and maintaining an effective supply of nurses; and for advocating for nurses and patients. Balancing these imperatives requires thoughtful strategic planning and disciplined execution. In their absence, organizations flounder, addressing single problems in isolation and struggling to perform against outcomes. One organization approached the challenge by engaging in a comprehensive, accelerated strategic planning process. The experience brought together 11 hospital nursing executives in consensus around a prioritized strategic agenda. This article is a case study of the approach used to define a nursing agenda.

  13. Architectural requirements for the Red Storm computing system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, William J.; Tomkins, James Lee

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low-latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  14. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    ERIC Educational Resources Information Center

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  15. Pennsylvania's Transition to Enterprise Computing as a Study in Strategic Alignment

    ERIC Educational Resources Information Center

    Sawyer, Steve; Hinnant, Charles C.; Rizzuto, Tracey

    2008-01-01

    We theorize about the strategic alignment of computing with organizational mission, using the Commonwealth of Pennsylvania's efforts to pursue digital government initiatives as evidence. To do this we draw on a decade (1995-2004) of changes in Pennsylvania to characterize how a state government shifts from an organizational to an enterprise…

  16. Profiles of Motivated Self-Regulation in College Computer Science Courses: Differences in Major versus Required Non-Major Courses

    NASA Astrophysics Data System (ADS)

    Shell, Duane F.; Soh, Leen-Kiat

    2013-12-01

    The goal of the present study was to utilize a profiling approach to understand differences in motivation and strategic self-regulation among post-secondary STEM students in major versus required non-major computer science courses. Participants were 233 students from required introductory computer science courses (194 men; 35 women; 4 unknown) at a large Midwestern state university. Cluster analysis identified five profiles: (1) a strategic profile of a highly motivated by-any-means good strategy user; (2) a knowledge-building profile of an intrinsically motivated autonomous, mastery-oriented student; (3) a surface learning profile of a utility motivated minimally engaged student; (4) an apathetic profile of an amotivational disengaged student; and (5) a learned helpless profile of a motivated but unable to effectively self-regulate student. Among CS majors and students in courses in their major field, the strategic and knowledge-building profiles were the most prevalent. Among non-CS majors and students in required non-major courses, the learned helpless, surface learning, and apathetic profiles were the most prevalent. Students in the strategic and knowledge-building profiles had significantly higher retention of computational thinking knowledge than students in other profiles. Students in the apathetic and surface learning profiles saw little instrumentality of the course for their future academic and career objectives. Findings show that students in STEM fields taking required computer science courses exhibit the same constellation of motivated strategic self-regulation profiles found in other post-secondary and K-12 settings.

  17. Strategic research in the social sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bainbridge, W.S.

    1995-12-31

    The federal government has identified a number of multi-agency funding initiatives for science in strategic areas, such as the initiatives on global environmental change and high performance computing, that give some role to the social sciences. Seven strategic areas for social science research with potential for federal funding are identified: (1) Democratization. (2) Human Capital. (3) Administrative Science. (4) Cognitive Science. (5) High Performance Computing and Digital Libraries. (6) Human Dimensions of Environmental Change. and (7) Human Genetic Diversity. The first two are addressed in detail and the remainder as a group. 10 refs.

  18. Strategic Imagination: The Lost Dimension of Strategic Studies.

    DTIC Science & Technology

    1984-09-01

    the advent of computer technology brought about not only an increased usage of gaming techniques, but also broadened the spectrum of problems and...direct relevance for the use of experts as advisors in decision-making, especially in areas of broad or long-range policy formulation. It is useful for...and the Anti-Submarine Warfare trainer in Norfolk. 5. Computer Assisted Games: The advent of computers opened many new possibilities for scenario

  19. Controlling flexible robot arms using a high speed dynamics process

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan (Inventor); Rodriguez, Guillermo (Inventor)

    1992-01-01

    Described here is a robot controller for a flexible manipulator arm having plural bodies connected at respective movable hinges, and flexible in plural deformation modes. It is operated by computing articulated body quantities for each of the bodies from the respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables. Computing modal deformation accelerations and hinge accelerations is carried out for each of the bodies, beginning with the outermost body by computing a residual body force from a residual body force of a previous body, computing a resultant hinge acceleration from the body force, and then, for each one of the bodies beginning with the innermost body, computing a modal body acceleration from a modal body acceleration of a previous body, computing a modal deformation acceleration and hinge acceleration from the resulting hinge acceleration and from the modal body acceleration.
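
    The two-sweep recursion described above is easier to see in outline form. The sketch below is schematic only (it is not the patented algorithm): the dynamics are reduced to placeholder update functions, and just the sweep structure from the abstract is kept, an inward sweep from the outermost body that accumulates residual forces, followed by an outward sweep from the innermost body that produces accelerations.

      # Schematic two-sweep recursion over a chain of bodies. The update rules are
      # placeholders; only the ordering (outermost-in for forces, innermost-out for
      # accelerations) mirrors the abstract.

      def propagate_residual_force(prev_force, body):
          # Placeholder: fold the more-outboard body's residual force into this one.
          return prev_force + body["specified_force"]

      def propagate_acceleration(prev_accel, body, residual_force):
          # Placeholder: derive this body's acceleration from the more-inboard
          # body's acceleration and its own residual force.
          return prev_accel + residual_force / body["inertia"]

      def two_sweep_dynamics(bodies):
          """bodies[0] is the innermost (base) body, bodies[-1] the outermost (tip)."""
          # Inward sweep: start at the outermost body, accumulate residual forces.
          forces = [0.0] * len(bodies)
          force = 0.0
          for i in reversed(range(len(bodies))):
              force = propagate_residual_force(force, bodies[i])
              forces[i] = force
          # Outward sweep: start at the innermost body, produce accelerations.
          accels = []
          accel = 0.0
          for i in range(len(bodies)):
              accel = propagate_acceleration(accel, bodies[i], forces[i])
              accels.append(accel)
          return accels

      arm = [{"specified_force": 1.0, "inertia": 2.0},
             {"specified_force": 0.5, "inertia": 1.0},
             {"specified_force": 0.2, "inertia": 0.5}]
      print(two_sweep_dynamics(arm))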

  20. Computing Nash equilibria through computational intelligence methods

    NASA Astrophysics Data System (ADS)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games, as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
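
    The abstract does not write out the real-valued, nonnegative function whose global minima are the Nash equilibria. One common choice in this literature, given here as an assumption rather than as the authors' exact formulation, is the squared-deviation objective below.

      % Nonnegative objective whose zeros coincide with the Nash equilibria of a
      % finite strategic game (assumed form; the abstract does not state it):
      \[
        v(x) \;=\; \sum_{i=1}^{N} \sum_{j=1}^{m_i}
        \Bigl[\max\bigl(0,\; u_i(s_{ij}, x_{-i}) - u_i(x)\bigr)\Bigr]^{2}
      \]
      % x is a mixed-strategy profile, u_i is player i's expected payoff, and s_ij
      % is player i's j-th pure strategy. v(x) >= 0, and v(x) = 0 exactly when no
      % player gains by deviating to any pure strategy, i.e. at a Nash equilibrium,
      % so global minimization of v recovers equilibria.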

  1. Assessing the Relationships among Cloud Adoption, Strategic Alignment and Information Technology Effectiveness

    ERIC Educational Resources Information Center

    Chebrolu, Shankar Babu

    2010-01-01

    Against the backdrop of new economic realities, one of the larger forces that is affecting businesses worldwide is cloud computing, whose benefits include agility, time to market, time to capability, reduced cost, renewed focus on the core and strategic partnership with the business. Cloud computing can potentially transform a majority of the…

  2. Developing Strategic and Reasoning Abilities with Computer Games at Primary School Level

    ERIC Educational Resources Information Center

    Bottino, R. M.; Ferlino, L.; Ott, M.; Tavella, M.

    2007-01-01

    The paper reports a small-scale, long-term pilot project designed to foster strategic and reasoning abilities in young primary school pupils by engaging them in a number of computer games, mainly those usually called mind games (brainteasers, puzzlers, etc.). In this paper, the objectives, work methodology, experimental setting, and tools used in…

  3. Strategic Reading, Ontologies, and the Future of Scientific Publishing

    NASA Astrophysics Data System (ADS)

    Renear, Allen H.; Palmer, Carole L.

    2009-08-01

    The revolution in scientific publishing that has been promised since the 1980s is about to take place. Scientists have always read strategically, working with many articles simultaneously to search, filter, scan, link, annotate, and analyze fragments of content. An observed recent increase in strategic reading in the online environment will soon be further intensified by two current trends: (i) the widespread use of digital indexing, retrieval, and navigation resources and (ii) the emergence within many scientific disciplines of interoperable ontologies. Accelerated and enhanced by reading tools that take advantage of ontologies, reading practices will become even more rapid and indirect, transforming the ways in which scientists engage the literature and shaping the evolution of scientific publishing.

  4. Journey to the 21st Century. A Summary of OCLC's Strategic Plan.

    ERIC Educational Resources Information Center

    OCLC Online Computer Library Center, Inc., Dublin, OH.

    This report on some of the strategic planning decisions that OCLC has made for the 21st century begins by describing the evolution of OCLC from a pioneer in the computer revolution with its Online Union Catalog and Shared Cataloging System in 1971 to a system that currently has nearly 60 distinct offerings. Corresponding computer and…

  5. Advanced Ceramic-Metallic Composites for Lightweight Vehicle Braking Systems

    DOT National Transportation Integrated Search

    2012-09-11

    According to the Federal Transit Administration Strategic Research Plan [1]: Researching technologies to reduce vehicle weight can also lead to important reductions in fuel consumption and emissions. The power required to accelerate a bus and over...

  6. A strategic plan to accelerate development of acute stroke treatments.

    PubMed

    Marler, John R

    2012-09-01

    In order to reenergize acute stroke research and accelerate the development of new treatments, we need to transform the usual design and conduct of clinical trials to test for small but significant improvements in effectiveness, and treat patients as soon as possible after stroke onset when treatment effects are most detectable. This requires trials that include thousands of acute stroke patients. A plan to make these trials possible is proposed. There are four components: (1) free access to the electronic medical record; (2) a large stroke emergency network and clinical trial coordinating center connected in real time to hundreds of emergency departments; (3) a clinical trial technology development center; and (4) strategic leadership to raise funds, motivate clinicians to participate, and interact with politicians, insurers, legislators, and other national and international organizations working to advance the quality of stroke care. © 2012 New York Academy of Sciences.

  7. Checkpointing for a hybrid computing node

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
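
    A small sketch of the flow just described, purely illustrative and not the patented implementation (the class and function names are made up): a task runs on a toy "accelerator", a checkpoint of its restart state is taken into accelerator-local memory, execution resumes immediately, and the state data is copied to host (main processor) memory in a background thread while the task keeps running.

      # Toy model of checkpointing on a hybrid node: checkpoint locally, resume the
      # task, and overlap the transfer of state data to the host with execution.
      import copy
      import threading

      class ToyAccelerator:
          def __init__(self):
              self.local_memory = {}                       # accelerator-local memory
              self.state = {"iteration": 0, "partial_sum": 0.0}

          def step(self):
              # One unit of task execution.
              self.state["iteration"] += 1
              self.state["partial_sum"] += 1.0 / self.state["iteration"]

          def create_checkpoint(self):
              # Snapshot the restart state into accelerator-local memory.
              self.local_memory["checkpoint"] = copy.deepcopy(self.state)

      host_memory = {}

      def transfer_to_host(acc):
          # Runs concurrently with continued task execution on the accelerator.
          host_memory["checkpoint"] = copy.deepcopy(acc.local_memory["checkpoint"])

      acc = ToyAccelerator()
      transfer = None
      for i in range(1, 101):
          acc.step()
          if i % 25 == 0:                                  # periodic checkpoint
              acc.create_checkpoint()
              transfer = threading.Thread(target=transfer_to_host, args=(acc,))
              transfer.start()                             # copy overlaps execution
      if transfer is not None:
          transfer.join()
      print("restart state held on host:", host_memory["checkpoint"])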

  8. Leading Strategic & Cultural Change through Technology. Proceedings of the Association of Small Computer Users in Education (ASCUE) Annual Conference (37th, Myrtle Beach, South Carolina, June 6-10, 2004)

    ERIC Educational Resources Information Center

    Smith, Peter, Ed.; Smith, Carol L., Ed.

    2004-01-01

    This 2004 Association of Small Computer Users in Education (ASCUE) conference proceedings presented the theme "Leading Strategic & Cultural Change through Technology." The conference introduced its ASCUE Officers and Directors, and provides abstracts of the pre-conference workshops. The full-text conference papers in this document…

  9. Structural characterization of UHPC waffle bridge deck and connections.

    DOT National Transportation Integrated Search

    2014-07-01

    The AASHTO strategic plan in 2005 for bridge engineering identified extending the service life of bridges and accelerating bridge construction as two of the grand challenges in bridge engineering. These challenges have the objective of producing sa...

  10. Proceeding of the 1999 Particle Accelerator Conference. Volume 3

    DTIC Science & Technology

    1999-04-02

    conditioning, a laser pulse was irradiated on a copper cathode and the photo-emitted beam was accelerated up to 2.9 MeV. An effective quantum...dipole magnet and a vacuum Nd:YAG laser pulse irradiation. As a result, the pumping unit. The gun cavity has two s-band cells made maximum energy and the...Optimizing beam intensity in the AGS involves a compromise between conflicting needs; correctors at strategic locations are pulsed to minimize the

  11. Clinical engineering department strategic graphical dashboard to enhance maintenance planning and asset management.

    PubMed

    Sloane, Elliot; Rosow, Eric; Adam, Joe; Shine, Dave

    2005-01-01

    The Clinical Engineering (a.k.a. Biomedical Engineering) Department has heretofore lagged in adoption of some of the leading-edge information system tools used in other industries. This present application is part of a DOD-funded SBIR grant to improve the overall management of medical technology, and describes the capabilities that Strategic Graphical Dashboards (SGDs) can afford. This SGD is built on top of an Oracle database, and uses custom-written graphic objects like gauges, fuel tanks, and Geographic Information System (GIS) maps to improve and accelerate decision making.

  12. A Guide to IRUS-II Application Development

    DTIC Science & Technology

    1989-09-01

    Stallard (editors). Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, chapter 3, pages 27-34...Development in Natural Language Processing in the Strategic Computing Program. Computational Linguistics 12(2):132-136, April-June 1986. [24] Sidner, C.L...assist developers interested in adapting IRUS-II to new application domains. Chapter 2 provides a general introduction and overview. Chapter 3 describes

  13. Long term validation of an accelerated polishing test procedure for HMA pavements.

    DOT National Transportation Integrated Search

    2013-04-01

    The Ohio Department of Transportation (ODOT) has set strategic goals to improve driving safety by maintaining smooth pavement surfaces with high skid resistance. ODOT has taken the initiative to monitor pavement friction on Ohio roadways and reme...

  14. Executive summary of the Strategic Plan for National Institutes of Health Obesity Research.

    PubMed

    Spiegel, Allen M; Alving, Barbara M

    2005-07-01

    The Strategic Plan for National Institutes of Health (NIH) Obesity Research is intended to serve as a guide for coordinating obesity research activities across the NIH and for enhancing the development of new efforts based on identification of areas of greatest scientific opportunity and challenge. Developed by the NIH Obesity Research Task Force with critical input from external scientists and the public, the Strategic Plan reflects a dynamic planning process and presents a multidimensional research agenda, with an interrelated set of goals and strategies for achieving the goals. The major scientific themes around which the Strategic Plan is framed include the following: preventing and treating obesity through lifestyle modification; preventing and treating obesity through pharmacologic, surgical, or other medical approaches; breaking the link between obesity and its associated health conditions; and cross-cutting topics, including health disparities, technology, fostering of interdisciplinary research teams, investigator training, translational research, and education/outreach efforts. Through the efforts described in the Strategic Plan for NIH Obesity Research, the NIH will strive to facilitate and accelerate progress in obesity research to improve public health.

  15. An Innovative Method for Evaluating Strategic Goals in a Public Agency: Conservation Leadership in the U.S. Forest Service

    Treesearch

    David N. Bengston; David P. Fan

    1999-01-01

    This article presents an innovative methodology for evaluating strategic planning goals in a public agency. Computer-coded content analysis was used to evaluate attitudes expressed in about 28,000 on-line news media stories about the U.S. Department of Agriculture Forest Service and its strategic goal of conservation leadership. Three dimensions of conservation...

  16. U.S. Climate Change Science Program. Vision for the Program and Highlights of the Scientific Strategic Plan

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The vision document provides an overview of the Climate Change Science Program (CCSP) long-term strategic plan to enhance scientific understanding of global climate change. This document is a companion to the comprehensive Strategic Plan for the Climate Change Science Program. The report responds to the President's direction that climate change research activities be accelerated to provide the best possible scientific information to support public discussion and decisionmaking on climate-related issues. The plan also responds to Section 104 of the Global Change Research Act of 1990, which mandates the development and periodic updating of a long-term national global change research plan coordinated through the National Science and Technology Council. This is the first comprehensive update of a strategic plan for U.S. global change and climate change research since the original plan for the U.S. Global Change Research Program was adopted at the inception of the program in 1989.

  17. Analysis of optoelectronic strategic planning in Taiwan by artificial intelligence portfolio tool

    NASA Astrophysics Data System (ADS)

    Chang, Rang-Seng

    1992-05-01

    Taiwan ROC has achieved significant advances in the optoelectronic industry, with some Taiwan products ranked high in the world market and technology. Six segments of the optoelectronic industry were planned, and each was divided into several strategic items; an artificial intelligence portfolio tool (AIPT) was designed to analyze optoelectronic strategic planning in Taiwan. The portfolio is designed to provoke strategic thinking intelligently. This computer-generated strategy should be selected and modified by the individual. Some strategies for the development of the Taiwan optoelectronic industry are also discussed in this paper.

  18. A roadmap for the prevention of dementia II: Leon Thal Symposium 2008.

    PubMed

    Khachaturian, Zaven S; Snyder, Peter J; Doody, Rachelle; Aisen, Paul; Comer, Meryl; Dwyer, John; Frank, Richard A; Holzapfel, Andrew; Khachaturian, Ara S; Korczyn, Amos D; Roses, Allen; Simpkins, James W; Schneider, Lon S; Albert, Marilyn S; Egge, Robert; Deves, Aaron; Ferris, Steven; Greenberg, Barry D; Johnson, Carl; Kukull, Walter A; Poirier, Judes; Schenk, Dale; Thies, William; Gauthier, Serge; Gilman, Sid; Bernick, Charles; Cummings, Jeffrey L; Fillit, Howard; Grundman, Michael; Kaye, Jeff; Mucke, Lennart; Reisberg, Barry; Sano, Mary; Pickeral, Oxana; Petersen, Ronald C; Mohs, Richard C; Carrillo, Maria; Corey-Bloom, Jody P; Foster, Norman L; Jacobsen, Steve; Lee, Virginia; Potter, William Z; Sabbagh, Marwan N; Salmon, David; Trojanowski, John Q; Wexler, Nancy; Bain, Lisa J

    2009-03-01

    This document proposes an array of recommendations for a National Plan of Action to accelerate the discovery and development of therapies to delay or prevent the onset of disabling symptoms of Alzheimer's disease. A number of key scientific and public-policy needs identified in this document will be incorporated by the Alzheimer Study Group into a broader National Alzheimer's Strategic Plan, which will be presented to the 111th Congress and the Obama administration in March 2009. The Alzheimer's Strategic Plan is expected to include additional recommendations for governance, family support, healthcare, and delivery of social services.

  19. Advanced Computing Tools and Models for Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  20. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  1. Core Communications

    ERIC Educational Resources Information Center

    Block, Greg; Ross, J. D.; Mulder, David

    2011-01-01

    The website--it is where people go to find out anything and everything about a school, college, or university. In the relatively short life of the Internet, institutional websites have moved from the periphery to center stage and become strategically integral communications and marketing tools. As the flow of information accelerates and new…

  2. Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.

    PubMed

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-09-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
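
    The 42 × 57 example can be checked directly; the few lines below (my own illustration, not the authors' materials) score the three candidate procedures by distance from the exact product and recover 40 × 60 as the optimal problem-based choice.

      # Score the three rounding procedures for 42 x 57 by distance from the exact
      # answer; the closest one is the "optimal problem-based" procedure.
      exact = 42 * 57                                  # 2394
      candidates = {"40 x 60": 40 * 60, "40 x 50": 40 * 50, "50 x 60": 50 * 60}
      best = min(candidates, key=lambda name: abs(candidates[name] - exact))
      for name, value in candidates.items():
          print(f"{name} = {value}, off by {abs(value - exact)}")
      print("optimal problem-based procedure:", best)   # 40 x 60, off by 6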

  3. Problem-Solving Rules for Genetics.

    ERIC Educational Resources Information Center

    Collins, Angelo

    The categories and applications of strategic knowledge as these relate to problem solving in the area of transmission genetics are examined in this research study. The role of computer simulations in helping students acquire the strategic knowledge necessary to solve realistic transmission genetics problems was emphasized. The Genetics…

  4. Accelerated Application Development: The ORNL Titan Experience

    DOE PAGES

    Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...

    2015-05-09

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  5. Accelerated application development: The ORNL Titan experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Archibald, Rick; Berrill, Mark

    2015-08-01

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  6. Risk and Infrastructure Science Center - Global Security Sciences

    Science.gov Websites

    delivers scientific tools and methodologies to inform decision making regarding the most challenging…

  7. Accelerated Climate Modeling for Energy (ACME) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhary, Aashish

    Seven Department of Energy (DOE) national laboratories, universities, and Kitware undertook a coordinated effort to build an Earth system modeling capability tailored to meet the climate change research strategic objectives of the DOE Office of Science, as well as the broader climate change application needs of other DOE programs.

  8. Teaching Deanna to Read: A Case Study.

    ERIC Educational Resources Information Center

    Tiwald, Jeanette M.

    1995-01-01

    Describes a Reading Recovery case study involving a first-grade student who was at risk for learning how to read and write. Notes that this student learned to read strategically and was accelerated to the average band in her classroom after 81 Reading Recovery lessons, without first knowing the alphabet. (SR)

  9. 76 FR 13984 - Cloud Computing Forum & Workshop III

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ... DEPARTMENT OF COMMERCE National Institute of Standards and Technology Cloud Computing Forum... public workshop. SUMMARY: NIST announces the Cloud Computing Forum & Workshop III to be held on April 7... provide information on the NIST strategic and tactical Cloud Computing program, including progress on the...

  10. Team Culture and Business Strategy Simulation Performance

    ERIC Educational Resources Information Center

    Ritchie, William J.; Fornaciari, Charles J.; Drew, Stephen A. W.; Marlin, Dan

    2013-01-01

    Many capstone strategic management courses use computer-based simulations as core pedagogical tools. Simulations are touted as assisting students in developing much-valued skills in strategy formation, implementation, and team management in the pursuit of superior strategic performance. However, despite their rich nature, little is known regarding…

  11. Community Petascale Project for Accelerator Science And Simulation: Advancing Computational Science for Future Accelerators And Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis (Fermilab); Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. The ASCI Network for SC 2000: Gigabyte Per Second Networking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; NAEGLE, JOHN H.; MARTINEZ JR., LUIS G.

    2001-11-01

    This document highlights the DISCOM2 Distance Computing and Communication team's activities at the 2000 Supercomputing conference in Dallas, Texas. This conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Lab, and Lawrence Livermore National Lab have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) program rubric to demonstrate ASCI's emerging capabilities in computational science and our combined expertise in high performance computer science and communication networking developments within the program. At SC 2000, DISCOM demonstrated an infrastructure that included a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte per second data IP network transfer application, and VPN technology that enabled a remote Distributed Resource Management tools demonstration; DISCOM2 uses this forum to demonstrate and focus communication and networking developments. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speed and distances similar to the then-proposed DISCOM WAN. SCinet at SC2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology while on the booth. The team also supported the production networking needs of the convention exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high performance computing networking.

  13. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  14. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. Quality of the computed deflection, acceleration, and velocity values are independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
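
    As a rough illustration of the two approximations named in this abstract, the sketch below was written for this summary and is not NASA code; the function name, signature, and test signal are assumptions, and the autoregressive refinement described in the paper is omitted. It computes acceleration from a deflection time history under the simple harmonic motion assumption and velocity with a central difference.

        import numpy as np

        def accel_velocity_from_deflection(x, f_hz, dt):
            """Approximate acceleration and velocity from a deflection time history."""
            x = np.asarray(x, dtype=float)
            omega = 2.0 * np.pi * f_hz
            accel = -(omega ** 2) * x                  # simple harmonic motion assumption
            vel = np.empty_like(x)
            vel[1:-1] = (x[2:] - x[:-2]) / (2.0 * dt)  # central difference in the interior
            vel[0] = (x[1] - x[0]) / dt                # one-sided differences at the ends
            vel[-1] = (x[-1] - x[-2]) / dt
            return accel, vel

        # Example: a 5 Hz sinusoidal deflection sampled at 1 kHz
        t = np.arange(0.0, 1.0, 1e-3)
        x = 0.01 * np.sin(2 * np.pi * 5 * t)
        a, v = accel_velocity_from_deflection(x, f_hz=5.0, dt=1e-3)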

  15. Economic Modeling as a Component of Academic Strategic Planning.

    ERIC Educational Resources Information Center

    MacKinnon, Joyce; Sothmann, Mark; Johnson, James

    2001-01-01

    Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)

  16. Cyber Strategic Inquiry: Enabling Change through a Strategic Simulation and Megacommunity Concept

    DTIC Science & Technology

    2009-02-01

    malicious software embedded in thumb drives and CDs that thwarted protections, such as antivirus software, on computers. In the scenario, these...Executives for National Security • The Carlyle Group • Cassat Corporation • Cisco Systems, Inc. • Cyveillance • General Dynamics • General Motors

  17. Factors Influencing the Adoption of Cloud Computing by Decision Making Managers

    ERIC Educational Resources Information Center

    Ross, Virginia Watson

    2010-01-01

    Cloud computing is a growing field, addressing the market need for access to computing resources to meet organizational computing requirements. The purpose of this research is to evaluate the factors that influence an organization in their decision whether to adopt cloud computing as a part of their strategic information technology planning.…

  18. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  19. Information Technology: Making It All Fit. Track VIII: Academic Computing Strategy.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Six papers from the 1988 CAUSE conference's Track VIII, Academic Computing Strategy, are presented. They include: "Achieving Institution-Wide Computer Fluency: A Five-Year Retrospective" (Paul J. Plourde); "A Methodology and a Policy for Building and Implementing a Strategic Computer Plan" (Frank B. Thomas); "Aligning…

  20. Controlling Flexible Robot Arms Using High Speed Dynamics Process

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan (Inventor)

    1996-01-01

    A robot manipulator controller for a flexible manipulator arm having plural bodies connected at respective movable hinges and flexible in plural deformation modes corresponding to respective modal spatial influence vectors relating deformations of plural spaced nodes of respective bodies to the plural deformation modes, operates by computing articulated body quantities for each of the bodies from respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables, and computing modal deformation accelerations and hinge accelerations is carried out for each one of the bodies beginning with the outermost body by computing a residual body force from a residual body force of a previous body and from the vector of deformation and hinge configuration variables, computing a resultant hinge acceleration from the body force, the residual body force and the articulated hinge inertia, and revising the residual body force and modal body acceleration.
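
    The outermost-to-innermost recursion described in this abstract can be sketched structurally. The toy Python fragment below is only an illustration of that control flow written for this summary; the scalar "inertia" and "coupling" values and all names are invented placeholders, not the patent's equations. Each body's specified force is combined with a residual force carried inward from the outer bodies, a hinge acceleration is obtained from the articulated hinge inertia, and a revised residual force is passed to the next body.

        def sweep_hinge_accelerations(bodies):
            """bodies: list ordered from outermost to innermost; each entry holds
            placeholder scalars 'body_force', 'articulated_inertia', and 'coupling'."""
            residual = 0.0            # residual force carried inward from outer bodies
            accelerations = []
            for body in bodies:
                total_force = body["body_force"] + residual
                accelerations.append(total_force / body["articulated_inertia"])
                residual = body["coupling"] * total_force   # revised residual for inner body
            return accelerations

        example = [
            {"body_force": 1.0, "articulated_inertia": 2.0, "coupling": 0.5},
            {"body_force": 0.5, "articulated_inertia": 3.0, "coupling": 0.5},
        ]
        print(sweep_hinge_accelerations(example))   # [0.5, 0.333...]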

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  3. Strategic Leadership for Education Reform: Lessons from the Statewide Systemic Initiatives Program. CPRE Policy Briefs RB-41

    ERIC Educational Resources Information Center

    Heck, Daniel J.; Weiss, Iris R.

    2005-01-01

    In 1990, the National Science Foundation (NSF) created the Statewide Systemic Initiative Program. The solicitation issued by the Directorate for Science and Engineering Education sought proposals "for projects intended to broaden the impact, accelerate the pace, and increase the effectiveness of improvements in science, mathematics, and…

  4. Accelerated Plan for Closing the Gaps by 2015

    ERIC Educational Resources Information Center

    Texas Higher Education Coordinating Board, 2010

    2010-01-01

    Texas launched its ambitious strategic plan for higher education, "Closing the Gaps by 2015," in the year 2000 to create a statewide vision for closing the higher education gaps within Texas and between Texas and other leading states. The plan focuses on bringing Texas to national parity in four critical areas of higher education:…

  5. Research and Development of Wires and Cables for High-Field Accelerator Magnets

    DOE PAGES

    Barzi, Emanuela; Zlobin, Alexander V.

    2016-02-18

    The latest strategic plans for High Energy Physics endorse steadfast superconducting magnet technology R&D for future Energy Frontier Facilities. This includes 10 to 16 T Nb3Sn accelerator magnets for the luminosity upgrades of the Large Hadron Collider and eventually for a future 100 TeV scale proton-proton (pp) collider. This paper describes the multi-decade R&D investment in the Nb3Sn superconductor technology, which was crucial to produce the first reproducible 10 to 12 T accelerator-quality dipoles and quadrupoles, as well as their scale-up. We also indicate prospective research areas in superconducting Nb3Sn wires and cables to achieve the next goals for superconducting accelerator magnets. Emphasis is on increasing performance and decreasing costs while pushing the Nb3Sn technology to its limits for future pp colliders.

  6. Future of Department of Defense Cloud Computing Amid Cultural Confusion

    DTIC Science & Technology

    2013-03-01

    enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments...endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud...through data center consolidation and individual Service-provided cloud computing.

  7. Terascale Computing in Accelerator Science and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Kwok

    2002-08-21

    We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.

  8. Developing the Strategic Thinking of Instructional Leaders. Occasional Paper No. 13.

    ERIC Educational Resources Information Center

    Hallinger, Philip; McCary, C. E.

    Emerging research on instructional leadership is examined in this paper, with a focus on the new perspective on strategic thinking. The main theme is that research must address the reasoning that underlies the exercise of leadership rather than describe discrete behaviors of effective leaders. A computer simulation designed to facilitate the…

  9. Austin Community College Learning Resource Services Strategic Plan, 1992-1997.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX.

    Designed as a planning tool and a statement of philosophy and mission, this five-part strategic planning report provides information on the activities, goals, and review processes of the Learning Resource Services (LRS) at Austin Community College in Austin, Texas. The LRS combines library services, access to computer terminals, and other…

  10. Improving Students' Self-Efficacy in Strategic Management: The Relative Impact of Cases and Simulations.

    ERIC Educational Resources Information Center

    Tompson, George H.; Dass, Parshotam

    2000-01-01

    Investigates the relative contribution of computer simulations and case studies for improving undergraduate students' self-efficacy in strategic management courses. Results of pre-and post-test data, regression analysis, and analysis of variance show that simulations result in significantly higher improvement in self-efficacy than case studies.…

  11. Developing Oral Proficiency with VoiceThread: Learners' Strategic Uses and Views

    ERIC Educational Resources Information Center

    Dugartsyrenova, Vera A.; Sardegna, Veronica G.

    2017-01-01

    This study explored Russian as a foreign language (RFL) learners' self-reported strategic uses of "VoiceThread" (VT)--a multimodal asynchronous computer-mediated communication tool--in order to gain insights into learner perceived effectiveness of VT for second language (L2) oral skills development and to determine the factors that…

  12. Neural signatures of strategic types in a two-person bargaining game

    PubMed Central

    Bhatt, Meghana A.; Lohrenz, Terry; Camerer, Colin F.; Montague, P. Read

    2010-01-01

    The management and manipulation of our own social image in the minds of others requires difficult and poorly understood computations. One computation useful in social image management is strategic deception: our ability and willingness to manipulate other people's beliefs about ourselves for gain. We used an interpersonal bargaining game to probe the capacity of players to manage their partner's beliefs about them. This probe parsed the group of subjects into three behavioral types according to their revealed level of strategic deception; these types were also distinguished by neural data measured during the game. The most deceptive subjects emitted behavioral signals that mimicked a more benign behavioral type, and their brains showed differential activation in right dorsolateral prefrontal cortex and left Brodmann area 10 at the time of this deception. In addition, strategic types showed a significant correlation between activation in the right temporoparietal junction and expected payoff that was absent in the other groups. The neurobehavioral types identified by the game raise the possibility of identifying quantitative biomarkers for the capacity to manipulate and maintain a social image in another person's mind. PMID:21041646

  13. 2011 Computation Directorate Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

  14. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
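
    For context, the baseline that both families of methods accelerate is the classic unaccelerated power iteration for the dominant eigenvalue, in which every iteration costs one expensive transport sweep. The sketch below was written for this summary (a small dense matrix stands in for the transport/fission operator; it is not code from the paper) and shows that baseline.

        import numpy as np

        def power_iteration(apply_operator, n, tol=1e-10, max_iter=10_000):
            """Unaccelerated power iteration for the dominant eigenvalue k."""
            phi = np.ones(n)          # initial flux/source guess
            k = 1.0
            for _ in range(max_iter):
                psi = apply_operator(phi)                    # one "transport sweep"
                k_new = np.linalg.norm(psi) / np.linalg.norm(phi)
                phi = psi / np.linalg.norm(psi)
                if abs(k_new - k) < tol * abs(k_new):
                    return k_new, phi
                k = k_new
            return k, phi

        A = np.array([[2.0, 1.0], [1.0, 3.0]])               # stand-in operator
        k_eff, mode = power_iteration(lambda v: A @ v, n=2)  # k_eff ~ 3.618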

  15. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning

    PubMed Central

    Zhu, Lusha; Mathewson, Kyle E.; Hsu, Ming

    2012-01-01

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents’ beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs. PMID:22307594
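
    A toy version of the two error signals named here can be written down directly. The fragment below is an illustration for this summary; the learning rates, the two-action game, and the simple delta-rule belief update are assumptions, not the authors' fitted model. It computes a reinforcement prediction error from the obtained payoff and a belief-based prediction error from the opponent's observed action.

        import numpy as np

        def update(values, beliefs, my_action, opp_action, payoff, alpha=0.2, beta=0.3):
            """One learning step tracking both reinforcement and belief prediction errors."""
            rpe = payoff - values[my_action]          # reinforcement prediction error
            values = values.copy()
            values[my_action] += alpha * rpe

            observed = np.zeros_like(beliefs)
            observed[opp_action] = 1.0
            bpe = observed - beliefs                  # belief prediction error
            beliefs = beliefs + beta * bpe
            return values, beliefs, rpe, bpe

        values = np.zeros(2)                          # value of my two strategies
        beliefs = np.array([0.5, 0.5])                # belief over opponent's two strategies
        values, beliefs, rpe, bpe = update(values, beliefs, my_action=0, opp_action=1, payoff=1.0)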

  16. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning.

    PubMed

    Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming

    2012-01-31

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.

  17. The Impact of Computer-Based Information Systems Upon School and School District Administration.

    ERIC Educational Resources Information Center

    Hansen, Thomas; And Others

    1978-01-01

    This study investigates the ways in which computer-based information systems interact with the strategic planning, management control, and operational control in 11 Minnesota school districts. (Author/IRT)

  18. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  19. Computer Assistance in Information Work. Part I: Conceptual Framework for Improving the Computer/User Interface in Information Work. Part II: Catalog of Acceleration, Augmentation, and Delegation Functions in Information Work.

    ERIC Educational Resources Information Center

    Paisley, William; Butler, Matilda

    This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…

  20. SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jinfeng; Cao, Ruifen; Dai, Yumei

    Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, and to extend the ability of DPM to calculate arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from the accelerator measurement data, combined with optimized intensity maps, were used to calculate the dose distribution of the irregular, inhomogeneous irradiation field. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of the emitter was decided by the grid intensity. The direction of the emitter was decided by the combination of the virtual source and the emitter emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating the contaminated electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparing with the measured data. It was illustrated that the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of an irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of gamma analysis was 95.1% for peripheral lung cancer. The regular field and the irregular rotational field were all within the range of permitted error. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy the clinical requirement, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the China Academy of Science (XDA03040000); National Natural Science Foundation of China (81101132)
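
    The gamma-analysis passing rate quoted in the Results can be illustrated with a simplified calculation. The sketch below was written for this summary: a global one-dimensional gamma index with illustrative 3%/3 mm criteria, not the clinical 2-D/3-D tool used in the paper. For each reference point it takes the minimum combined dose-difference/distance-to-agreement metric over the evaluated profile and reports the fraction of points with gamma <= 1.

        import numpy as np

        def gamma_passing_rate(ref_dose, eval_dose, positions, dose_crit=0.03, dta_mm=3.0):
            """Simplified global 1-D gamma analysis of an evaluated dose profile."""
            ref_dose = np.asarray(ref_dose, dtype=float)
            eval_dose = np.asarray(eval_dose, dtype=float)
            positions = np.asarray(positions, dtype=float)
            dose_norm = dose_crit * ref_dose.max()        # global dose-difference criterion
            gammas = []
            for x_ref, d_ref in zip(positions, ref_dose):
                dist2 = ((positions - x_ref) / dta_mm) ** 2
                dose2 = ((eval_dose - d_ref) / dose_norm) ** 2
                gammas.append(np.sqrt(np.min(dist2 + dose2)))
            gammas = np.array(gammas)
            return float(np.mean(gammas <= 1.0)), gammas

        x = np.linspace(0.0, 100.0, 201)                  # positions in mm
        ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # reference dose profile
        ev = 1.02 * ref                                   # evaluated profile, 2% high
        rate, gamma = gamma_passing_rate(ref, ev, x)      # rate = 1.0 for this easy case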

  1. Commissioning the GTA accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sander, O.R.; Atkins, W.H.; Bolme, G.O.

    1992-09-01

    The Ground Test Accelerator (GTA) is supported by the Strategic Defense Command as part of their Neutral Particle Beam (NPB) program. Neutral particles have the advantage that in space they are unaffected by the earth's magnetic field and travel in straight lines unless they enter the earth's atmosphere and become charged by stripping. Heavy particles are difficult to stop and can probe the interior of space vehicles; hence, NPB can function as a discriminator between warheads and decoys. We are using GTA to resolve the physics and engineering issues related to accelerating, focusing, and steering a high-brightness, high-current H- beam and then neutralizing it. Our immediate goal is to produce a 24-MeV, 50-mA device with a 2% duty factor.

  2. Commissioning the GTA accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sander, O.R.; Atkins, W.H.; Bolme, G.O.

    1992-01-01

    The Ground Test Accelerator (GTA) is supported by the Strategic Defense Command as part of their Neutral Particle Beam (NPB) program. Neutral particles have the advantage that in space they are unaffected by the earth's magnetic field and travel in straight lines unless they enter the earth's atmosphere and become charged by stripping. Heavy particles are difficult to stop and can probe the interior of space vehicles; hence, NPB can function as a discriminator between warheads and decoys. We are using GTA to resolve the physics and engineering issues related to accelerating, focusing, and steering a high-brightness, high-current H- beam and then neutralizing it. Our immediate goal is to produce a 24-MeV, 50-mA device with a 2% duty factor.

  3. Strategic Planning for Computer-Based Educational Technology.

    ERIC Educational Resources Information Center

    Bozeman, William C.

    1984-01-01

    Offers educational practitioners direction for the development of a master plan for the implementation and application of computer-based educational technology by briefly examining computers in education, discussing organizational change from a theoretical perspective, and presenting an overview of the planning strategy known as the planning and…

  4. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; TARMAN, THOMAS D.; MARTINEZ, LUIS M.

    2000-07-24

    This document highlights the DISCOM2 Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM2 project. The DISCOM2 communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to the scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  5. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux Cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  6. Switching Perspectives: From a Language Teacher to a Designer of Language Learning with New Technologies

    ERIC Educational Resources Information Center

    Kuure, Leena; Molin-Juustila, Tonja; Keisanen, Tiina; Riekki, Maritta; Iivari, Netta; Kinnula, Marianne

    2016-01-01

    Despite abundant research on educational technology and strategic input in the field, various surveys have shown that (language) teachers do not seem to embrace in their teaching the full potential of information and communication technology available in our everyday life. Language students soon entering the professional field could accelerate the…

  7. Developing Managerial Learning Styles in the Context of the Strategic Application of Information and Communications Technologies.

    ERIC Educational Resources Information Center

    Holtham, Clive; Courtney, Nigel

    2001-01-01

    Training for 561 executives in the use of information and communications technologies was based on a model, the Executive Learning Ladder. Results indicated that sense making was accelerated when conducted in peer groups before being extended to less-experienced managers. Learning preference differences played a role. (Contains 38 references.) (SK)

  8. SPAWAR Strategic Plan Execution Year 2017

    DTIC Science & Technology

    2017-01-11

    the PEO C4I domain. Completed C4I Baseline implementation activities including product roadmap system reviews, realignment of product fielding within...preloading applications in the CANES production facility to reduce installation timelines • Implemented Installation Management Office alignment and...software update process • For candidate technologies (endeavors) in the innovation pipeline, identified key attributes and acceleration factors that

  9. Strategic Mobility Alternatives for the 1980s. Volume 2. Analysis and Conclusions

    DTIC Science & Technology

    1977-03-01

    …with renewed emphasis on the… Continued, even accelerated, acquisition of the spares necessary… H. Birch, J. Houston, L. L. Moorhous, J. Pederson, and H. B. Turin.

  10. Training in the Food and Beverages Sector in Ireland. Report for the FORCE Programme. First Edition.

    ERIC Educational Resources Information Center

    Hunt, Deirdre; And Others

    The food and beverage industry is of overwhelming strategic importance to the Irish economy. It is also one of the fastest changing sectors. Recent trends in this largely indigenous industry in recent years include the following: globalization, large and accelerating capital outlay, company consolidation, added value product, enhanced quality…

  11. 2009 Strategic Plan for Autism Spectrum Disorder Research. NIH Publication No. 09-7465

    ERIC Educational Resources Information Center

    Interagency Autism Coordinating Committee, 2009

    2009-01-01

    In response to the heightened societal concern over autism spectrum disorder (ASD), Congress passed the Combating Autism Act (CAA) of 2006 (P.L. 109-416). Through this Act, Congress intended to rapidly increase, accelerate the pace and improve coordination of scientific discovery in ASD research. The CAA requires the Interagency Autism…

  12. Exploring the Trajectory of Latinas into the Role of Community College President: Perspectives from Current Community College Leadership

    ERIC Educational Resources Information Center

    Reinhart, Ruth

    2017-01-01

    Historically, community colleges have been and continue to be a gateway of opportunity for many students. As Hispanic students continue to engage in community college institutions at accelerated rates, it is important that institutions of higher education make strategic adjustments. In response to the impending shortage of community college…

  13. Teaching Subtraction and Multiplication with Regrouping Using the Concrete-Representational-Abstract Sequence and Strategic Instruction Model

    ERIC Educational Resources Information Center

    Flores, Margaret M.; Hinton, Vanessa; Strozier, Shaunita D.

    2014-01-01

    Based on Common Core Standards (2010), mathematics interventions should emphasize conceptual understanding of numbers and operations as well as fluency. For students at risk for failure, the concrete-representational-abstract (CRA) sequence and the Strategic Instruction Model (SIM) have been shown effective in teaching computation with an emphasis…

  14. Networking as a Strategic Tool, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This conference focuses on the technological advances, pitfalls, requirements, and trends involved in planning and implementing an effective computer network system. The basic theme of the conference is networking as a strategic tool. Tutorials and conference presentations explore the technology and methods involved in this rapidly changing field. Future directions are explored from a global, as well as local, perspective.

  15. Structuring Assignments to Improve Understanding and Presentation Skills: Experiential Learning in the Capstone Strategic Management Team Presentation

    ERIC Educational Resources Information Center

    Helms, Marilyn M.; Whitesell, Melissa

    2017-01-01

    In the strategic management course, students select, analyze, and present viable future alternatives based on information provided in cases or computer simulations. Rather than understanding the entire process, the student's focus is on the final presentation. Chickering's (1977) research on active learning suggests students learn more effectively…

  16. New technology continues to invade healthcare. What are the strategic implications/outcomes?

    PubMed

    Smith, Coy

    2004-01-01

    Healthcare technology continues to advance and be implemented in healthcare organizations. Nurse executives must strategically evaluate the effectiveness of each proposed system or device using a strategic planning process. Clinical information systems, computer-chip-based clinical monitoring devices, advanced Web-based applications with remote, wireless communication devices, clinical decision support software--all compete for capital and registered nurse salary dollars. The concept of clinical transformation is developed with new models of care delivery being supported by technology rather than driving care delivery. Senior nursing leadership's role in clinical transformation and healthcare technology implementation is developed. Proposed standards, expert group action, business and consumer groups, and legislation are reviewed as strategic drivers in the development of an electronic health record and healthcare technology. A matrix of advancing technology and strategic decision-making parameters are outlined.

  17. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing by employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should effectuate the PROPELLER-EPI technique for clinical practice. Copyright © 2011 by the American Society of Neuroimaging.
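    The Ny-fold cost noted above comes from applying a separately accumulated off-resonance phase for every phase-encoding step. A minimal NumPy sketch of this kind of per-step (conjugate-phase style) demodulation is shown below; the function name, array shapes, and echo-spacing parameter are assumptions for illustration, not the authors' implementation, and the independent per-step passes are what a GPU version would execute in parallel (e.g., by swapping NumPy for a GPU array library).

```python
import numpy as np

def conjugate_phase_epi_correction(kspace, fieldmap_hz, echo_spacing_s):
    """Illustrative conjugate-phase style demodulation (NOT the authors' code).
    kspace:         (Ny, Nx) complex EPI k-space, phase encoding along axis 0.
    fieldmap_hz:    (Ny, Nx) off-resonance field map in Hz.
    echo_spacing_s: time between successive phase-encoding steps, in seconds.
    """
    Ny, Nx = kspace.shape
    image = np.zeros((Ny, Nx), dtype=complex)
    for n in range(Ny):  # one pass per phase-encoding step -> Ny-fold cost
        single_line = np.zeros_like(kspace)
        single_line[n, :] = kspace[n, :]
        partial = np.fft.ifft2(np.fft.ifftshift(single_line))
        # Remove the off-resonance phase accumulated up to this step.
        demod = np.exp(-2j * np.pi * fieldmap_hz * n * echo_spacing_s)
        image += partial * demod
    return image
```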

  18. Nonlinear theory of diffusive acceleration of particles by shock waves

    NASA Astrophysics Data System (ADS)

    Malkov, M. A.; Drury, L. O'C.

    2001-04-01

    Among the various acceleration mechanisms which have been suggested as responsible for the nonthermal particle spectra and associated radiation observed in many astrophysical and space physics environments, diffusive shock acceleration appears to be the most successful. We review the current theoretical understanding of this process, from the basic ideas of how a shock energizes a few reactionless particles to the advanced nonlinear approaches treating the shock and accelerated particles as a symbiotic self-organizing system. By means of direct solution of the nonlinear problem we set the limit to the test-particle approximation and demonstrate the fundamental role of nonlinearity in shocks of astrophysical size and lifetime. We study the bifurcation of this system, proceeding from the hydrodynamic to kinetic description under a realistic condition of Bohm diffusivity. We emphasize the importance of collective plasma phenomena for the global flow structure and acceleration efficiency by considering the injection process, an initial stage of acceleration, and the related aspects of the physics of collisionless shocks. We calculate the injection rate for different shock parameters and different species. This, together with differential acceleration resulting from nonlinear large-scale modification, determines the chemical composition of accelerated particles. The review concentrates on theoretical and analytical aspects but our strategic goal is to link the fundamental theoretical ideas with the rapidly growing wealth of observational data.

  19. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching. PMID:17555593

  20. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).

    PubMed

    Li, Isaac T S; Shum, Warren; Truong, Kevin

    2007-06-07

    To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching.
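    The design described above computes one cell of the SW scoring matrix per hardware module and tiles the matrix across a grid of such modules. For reference, a minimal pure-software version of the per-cell recurrence is sketched below; the scoring values (match = 2, mismatch = -1, gap = -1) are illustrative assumptions, not those of the paper, and the independence of cells along each anti-diagonal is exactly what the FPGA grid exploits.

```python
# Minimal software reference for the Smith-Waterman cell recurrence that the
# FPGA module implements in hardware. Scoring values are illustrative only.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best  # optimal local-alignment score

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```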

  1. Strategizing Computer-Supported Collaborative Learning toward Knowledge Building

    ERIC Educational Resources Information Center

    Mukama, Evode

    2010-01-01

    The purpose of this paper is to explore how university students can develop knowledge in small task-based groups while acquiring hands-on computer skills. Inspired by the sociocultural perspective, this study presents a theoretical framework on co-construction of knowledge and on computer-supported collaborative learning. The participants were…

  2. 2016 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  3. One Teacher's Role in Promoting Understanding in Mental Computation

    ERIC Educational Resources Information Center

    Heirdsfield, Ann

    2005-01-01

    This paper reports the teacher actions that promoted the development of students' mental computation. A Year 3 teacher engaged her class in developing mental computation strategies over a ten-week period. Two overarching issues that appeared to support learning were establishing connections and encouraging strategic thinking. (Contains 2 figures.)…

  4. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  5. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    ERIC Educational Resources Information Center

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or, as it is aptly called, go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  6. A Strategic Approach to Network Defense: Framing the Cloud

    DTIC Science & Technology

    2011-03-10

    accepted network defensive principles, to reduce risks associated with emerging virtualization capabilities and scalability of cloud computing. This expanded...defensive framework can assist enterprise networking and cloud computing architects to better design more secure systems.

  7. Analysis of ballistic transport in nanoscale devices by using an accelerated finite element contact block reduction approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H.; Li, G., E-mail: gli@clemson.edu

    2014-08-28

    An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. Finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach and associated computational cost as a function of system degrees of freedom are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.

  8. Strategic control in decision-making under uncertainty.

    PubMed

    Venkatraman, Vinod; Huettel, Scott A

    2012-04-01

    Complex economic decisions - whether investing money for retirement or purchasing some new electronic gadget - often involve uncertainty about the likely consequences of our choices. Critical for resolving that uncertainty are strategic meta-decision processes, which allow people to simplify complex decision problems, evaluate outcomes against a variety of contexts, and flexibly match behavior to changes in the environment. In recent years, substantial research has implicated the dorsomedial prefrontal cortex (dmPFC) in the flexible control of behavior. However, nearly all such evidence comes from paradigms involving executive function or response selection, not complex decision-making. Here, we review evidence that demonstrates that the dmPFC contributes to strategic control in complex decision-making. This region contains a functional topography such that the posterior dmPFC supports response-related control, whereas the anterior dmPFC supports strategic control. Activation in the anterior dmPFC signals changes in how a decision problem is represented, which in turn can shape computational processes elsewhere in the brain. Based on these findings, we argue for both generalized contributions of the dmPFC to cognitive control, and specific computational roles for its subregions depending upon the task demands and context. We also contend that these strategic considerations are likely to be critical for decision-making in other domains, including interpersonal interactions in social settings. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  9. Strategic Control in Decision Making under Uncertainty

    PubMed Central

    Venkatraman, Vinod; Huettel, Scott

    2012-01-01

    Complex economic decisions – whether investing money for retirement or purchasing some new electronic gadget – often involve uncertainty about the likely consequences of our choices. Critical for resolving that uncertainty are strategic meta-decision processes, which allow people to simplify complex decision problems, to evaluate outcomes against a variety of contexts, and to flexibly match behavior to changes in the environment. In recent years, substantial research implicates the dorsomedial prefrontal cortex (dmPFC) in the flexible control of behavior. However, nearly all such evidence comes from paradigms involving executive function or response selection, not complex decision making. Here, we review evidence that demonstrates that the dmPFC contributes to strategic control in complex decision making. This region contains a functional topography such that the posterior dmPFC supports response-related control while the anterior dmPFC supports strategic control. Activation in the anterior dmPFC signals changes in how a decision problem is represented, which in turn can shape computational processes elsewhere in the brain. Based on these findings, we argue both for generalized contributions of the dmPFC to cognitive control, and for specific computational roles for its subregions depending upon the task demands and context. We also contend that these strategic considerations are also likely to be critical for decision making in other domains, including interpersonal interactions in social settings. PMID:22487037

  10. Charting the expansion of strategic exploratory behavior during adolescence.

    PubMed

    Somerville, Leah H; Sasse, Stephanie F; Garrad, Megan C; Drysdale, Andrew T; Abi Akar, Nadine; Insel, Catherine; Wilson, Robert C

    2017-02-01

    Although models of exploratory decision making implicate a suite of strategies that guide the pursuit of information, the developmental emergence of these strategies remains poorly understood. This study takes an interdisciplinary perspective, merging computational decision making and developmental approaches to characterize age-related shifts in exploratory strategy from adolescence to young adulthood. Participants were 149 12-28-year-olds who completed a computational explore-exploit paradigm that manipulated reward value, information value, and decision horizon (i.e., the utility that information holds for future choices). Strategic directed exploration, defined as information seeking selective for long time horizons, emerged during adolescence and maintained its level through early adulthood. This age difference was partially driven by adolescents valuing immediate reward over new information. Strategic random exploration, defined as stochastic choice behavior selective for long time horizons, was invoked at comparable levels over the age range, and predicted individual differences in attitudes toward risk taking in daily life within the adolescent portion of the sample. Collectively, these findings reveal an expansion of the diversity of strategic exploration over development, implicate distinct mechanisms for directed and random exploratory strategies, and suggest novel mechanisms for adolescent-typical shifts in decision making. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the strategic method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affect the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  12. Advanced induction accelerator designs for ground based and space based FELs

    NASA Astrophysics Data System (ADS)

    Birx, Daniel

    1994-04-01

    The primary goal of this program was to improve the performance of induction accelerators with particular regard to their use in driving Free Electron Lasers (FEL's). It is hoped that FEL's operating at visible wavelengths might someday be used to beam power from earth to extraterrestrial locations. One application of this technology might be strategic theater defense, but this power source might be used to propel vehicles or supplement solar energized systems. Our path toward achieving this goal was directed first toward optimization of the nonlinear magnetic material used in induction accelerator construction and secondly at the overall design in terms of cost, size and efficiency. We began this research effort with an in-depth study into the properties of various nonlinear magnetic materials. With the data on nonlinear magnetic materials, so important to the optimization of efficiency, in hand, we envisioned a new induction accelerator design where all of the components were packaged together in one container. This induction accelerator module would combine an all-solid-state, nonlinear magnetic driver and the induction accelerator cells all in one convenient package. Each accelerator module (denoted SNOMAD-IVB) would produce 1.0 MeV of acceleration with the exception of the SNOMAD-IV injector module which would produce 0.5 MeV of acceleration for an electron beam current up to 1000 amperes.

  13. Computing Services Planning, Downsizing, and Organization at the University of Alberta.

    ERIC Educational Resources Information Center

    Beltrametti, Monica

    1993-01-01

    In a six-month period, the University of Alberta (Canada) campus computing services department formulated a strategic plan, and downsized and reorganized to meet financial constraints and respond to changing technology, especially distributed computing. The new department is organized to react more effectively to trends in technology and user…

  14. A Research Program in Computer Technology. 1986 Annual Technical Report

    DTIC Science & Technology

    1989-08-01

    Annual Technical Report, 1 July 1985 - June 1986: A Research Program in Computer Technology (ISI/SR-87-178), USC Information Sciences Institute, ISI Research Staff. Keywords: survivable networks; distributed processing, local networks, personal computers, workstation environment; computer acquisition, Strategic Computing.

  15. 2013 Progress Report -- DOE Joint Genome Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-11-01

    In October 2012, we introduced a 10-Year Strategic Vision [http://bit.ly/JGI-Vision] for the Institute. A central focus of this Strategic Vision is to bridge the gap between sequenced genomes and an understanding of biological functions at the organism and ecosystem level. This involves the continued massive-scale generation of sequence data, complemented by orthogonal new capabilities to functionally annotate these large sequence data sets. Our Strategic Vision lays out a path to guide our decisions and ensure that the evolving set of experimental and computational capabilities available to DOE JGI users will continue to enable groundbreaking science.

  16. Strengthening Deterrence for 21st Century Strategic Conflicts and Competition: Accelerating Adaptation and Integration - Annotated Bibliography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, B.; Durkalec, J. J.

    This was the fourth in a series of annual events convened at Livermore to explore the emerging place of the “new domains” in U.S. deterrence strategies. The purposes of the series are to facilitate the emergence of a community of interest that cuts across the policy, military, and technical communities and to inform laboratory strategic planning. U.S. allies have also been drawn into the conversation, as U.S. deterrence strategies are in part about their protection. Discussion in these workshops is on a not-for-attribution basis. It also makes no use of classified information. On this occasion, there were nearly 100 participants from a dozen countries.

  17. Use of a collaborative tool to simplify the outsourcing of preclinical safety studies: an insight into the AstraZeneca-Charles River Laboratories strategic relationship.

    PubMed

    Martin, Frederic D C; Benjamin, Amanda; MacLean, Ruth; Hollinshead, David M; Landqvist, Claire

    2017-12-01

    In 2012, AstraZeneca entered into a strategic relationship with Charles River Laboratories whereby preclinical safety packages comprising safety pharmacology, toxicology, formulation analysis, in vivo ADME, bioanalysis and pharmacokinetics studies are outsourced. New processes were put in place to ensure seamless workflows with the aim of accelerating the delivery of new medicines to patients. Here, we describe in more detail the AstraZeneca preclinical safety outsourcing model and the way in which a collaborative tool has helped to translate the processes in AstraZeneca and Charles River Laboratories into simpler integrated workflows that are efficient and visible across the two companies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Biodiversity and ecosystem services science for a sustainable planet: the DIVERSITAS vision for 2012-20.

    PubMed

    Larigauderie, Anne; Prieur-Richard, Anne-Hélène; Mace, Georgina M; Lonsdale, Mark; Mooney, Harold A; Brussaard, Lijbert; Cooper, David; Cramer, Wolfgang; Daszak, Peter; Díaz, Sandra; Duraiappah, Anantha; Elmqvist, Thomas; Faith, Daniel P; Jackson, Louise E; Krug, Cornelia; Leadley, Paul W; Le Prestre, Philippe; Matsuda, Hiroyuki; Palmer, Margaret; Perrings, Charles; Pulleman, Mirjam; Reyers, Belinda; Rosa, Eugene A; Scholes, Robert J; Spehn, Eva; Turner, Bl; Yahara, Tetsukazu

    2012-02-01

    DIVERSITAS, the international programme on biodiversity science, is releasing a strategic vision presenting scientific challenges for the next decade of research on biodiversity and ecosystem services: "Biodiversity and Ecosystem Services Science for a Sustainable Planet". This new vision is a response of the biodiversity and ecosystem services scientific community to the accelerating loss of the components of biodiversity, as well as to changes in the biodiversity science-policy landscape (establishment of a Biodiversity Observing Network - GEO BON, of an Intergovernmental science-policy Platform on Biodiversity and Ecosystem Services - IPBES, of the new Future Earth initiative; and release of the Strategic Plan for Biodiversity 2011-2020). This article presents the vision and its core scientific challenges.

  19. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
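    The essence of the CGP method, as described above, is to hand the Poisson problem to a black-box solver on a coarsened grid and interpolate the result back to the fine grid before the next time step. A minimal sketch of that step is given below, assuming a uniform grid with 2:1 coarsening; the operator choices (injection restriction, piecewise-constant prolongation) are illustrative assumptions, not the interpolations used in the paper.

```python
import numpy as np

def cgp_poisson_step(rhs_fine, coarse_poisson_solve):
    """Illustrative sketch of the coarse-grid projection idea on a uniform grid
    with 2:1 coarsening; solver and operator choices are assumptions."""
    rhs_coarse = rhs_fine[::2, ::2]                   # restriction (simple injection)
    p_coarse = coarse_poisson_solve(rhs_coarse)       # black-box elliptic solve, cheap
    # Prolongation back to the fine grid by piecewise-constant interpolation.
    return np.repeat(np.repeat(p_coarse, 2, axis=0), 2, axis=1)
```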

  20. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all papers, thirty of them are abstracted for the Energy Science and Technology database. (AIP)

  1. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  2. Acceleration of Cherenkov angle reconstruction with the new Intel Xeon/FPGA compute platform for the particle identification in the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Faerber, Christian

    2017-10-01

    The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of such a computing farm, which must process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve the compute performance and reduce the power consumption (e.g. in the Microsoft Catapult project and Bing search engine). The use of an experimental FPGA-accelerated computing platform in the Event Building or the Event Filter farm is therefore also being considered and tested for the LHCb upgrade. This platform from Intel hosts a general-purpose CPU and a high-performance FPGA connected via a high-speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket platform from Intel with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a compute-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was ported to the Intel Xeon/FPGA platform with OpenCL. The implementation work and the performance will be compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that the Intel Xeon/FPGA platforms, which are built in general for high performance computing, are also very interesting for the High Energy Physics community.

  3. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  4. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the computer code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  5. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  6. Convergence acceleration of viscous flow computations

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1982-01-01

    A multiple-grid convergence acceleration technique introduced for application to the solution of the Euler equations by means of Lax-Wendroff algorithms is extended to treat compressible viscous flow. Computational results are presented for the solution of the thin-layer version of the Navier-Stokes equations using the explicit MacCormack algorithm, accelerated by a convective coarse-grid scheme. Extensions and generalizations are mentioned.

  7. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

    This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the RadioAstron project. The software correlator is implemented for heterogeneous computing systems using graphics accelerators. It is shown that graphics hardware is highly efficient for the interferometry task. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project, there are seven such channels. Each accelerator computes the correlation matrix for all baselines in a single frequency channel. The input data are converted to floating-point format, corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously. The calculation of the correlation matrix is performed using the sliding Fourier transform. Because the problem maps well onto the graphics-accelerator architecture, a single processor of the Kepler platform achieves the performance that a four-node Intel computing cluster delivers on this task. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
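    The per-channel work assigned to each accelerator is the standard FX correlation step: Fourier transform each station's delay-corrected stream, then cross-multiply the spectra to accumulate the correlation matrix for every baseline. A minimal NumPy sketch of that step is shown below; the block-averaged FFT (rather than the sliding Fourier transform used in the paper), the array shapes, and the omission of fringe rotation are simplifying assumptions.

```python
import numpy as np

def fx_correlate(signals, nfft=1024):
    """Illustrative FX correlation for one frequency channel.
    signals: (n_stations, n_samples) delay-corrected voltage streams."""
    n_stations, n_samples = signals.shape
    n_blocks = n_samples // nfft
    corr = np.zeros((n_stations, n_stations, nfft), dtype=complex)
    for b in range(n_blocks):
        block = signals[:, b * nfft:(b + 1) * nfft]
        spectra = np.fft.fft(block, axis=1)        # the "F" step
        # The "X" step: accumulate S_i * conj(S_j) for every station pair.
        corr += spectra[:, None, :] * np.conj(spectra[None, :, :])
    return corr / n_blocks
```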

  8. Acceleration and torque feedback for robotic control - Experimental results

    NASA Technical Reports Server (NTRS)

    Mclnroy, John E.; Saridis, George N.

    1990-01-01

    Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.

  9. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    PubMed

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study showed the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] This study was used to measure the acceleration and peak contact pressure of the fingertips for different typing speed groups using an accelerometer and CONFORMat system. [Results] The fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. The fingertip acceleration was increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  10. The Material Supply Adjustment Process in RAMF-SM, Step 2

    DTIC Science & Technology

    2016-06-01

    The Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) is a suite of mathematical models and databases used to support the...and computes material shortfalls. Several mathematical models and dozens of databases, encompassing thousands of data items, support the

  11. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
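    The combination the DMAP Alter performs is, at each excitation frequency, a weighted sum of boundary accelerations and boundary displacements through the spacecraft's response transformation matrices. A minimal NumPy sketch of that combination step is given below; the matrix and array names are hypothetical, and the actual transformation matrices come from the spacecraft contractor's model.

```python
import numpy as np

def recover_internal_responses(accel_boundary, disp_boundary, rtm_accel, rtm_disp):
    """Illustrative mode acceleration data recovery step (hypothetical names).
    accel_boundary, disp_boundary: (Nfreq, Nboundary) complex frequency responses.
    rtm_accel, rtm_disp:           (Ninternal, Nboundary) response transformation matrices.
    Returns (Nfreq, Ninternal) internal responses, one row per excitation frequency."""
    return accel_boundary @ rtm_accel.T + disp_boundary @ rtm_disp.T
```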

  12. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2016-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an Autoregressive Moving Average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Keywords: shape sensing, fiber optic strain sensor, system equivalent reduction and expansion process.
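    As a rough illustration of the final step described above, once a deflection time history and a dominant frequency are in hand, velocity and acceleration can be estimated either by numerical differentiation or, for a single harmonic mode, from the relation a ≈ -(2πf)²x. The sketch below is an illustrative simplification under those assumptions, not the two-step theory of the paper.

```python
import numpy as np

def velocity_acceleration_from_deflection(deflection, freq_hz, dt):
    """Illustrative only: estimate velocity and acceleration from a reconstructed
    deflection time history sampled at interval dt, given a dominant frequency."""
    velocity = np.gradient(deflection, dt)                        # finite-difference velocity
    accel_fd = np.gradient(velocity, dt)                          # finite-difference acceleration
    accel_harmonic = -(2.0 * np.pi * freq_hz) ** 2 * deflection   # single-mode harmonic estimate
    return velocity, accel_fd, accel_harmonic
```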

  13. Working and strategic memory deficits in schizophrenia

    NASA Technical Reports Server (NTRS)

    Stone, M.; Gabrieli, J. D.; Stebbins, G. T.; Sullivan, E. V.

    1998-01-01

    Working memory and its contribution to performance on strategic memory tests in schizophrenia were studied. Patients (n = 18) and control participants (n = 15), all men, received tests of immediate memory (forward digit span), working memory (listening, computation, and backward digit span), and long-term strategic (free recall, temporal order, and self-ordered pointing) and nonstrategic (recognition) memory. Schizophrenia patients performed worse on all tests. Education, verbal intelligence, and immediate memory capacity did not account for deficits in working memory in schizophrenia patients. Reduced working memory capacity accounted for group differences in strategic memory but not in recognition memory. Working memory impairment may be central to the profile of impaired cognitive performance in schizophrenia and is consistent with hypothesized frontal lobe dysfunction associated with this disease. Additional medial-temporal dysfunction may account for the recognition memory deficit.

  14. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  15. Acceleration of saddle-point searches with machine learning.

    PubMed

    Peterson, Andrew A

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  16. Acceleration of saddle-point searches with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Andrew A., E-mail: andrew-peterson@brown.edu

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
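    The train-search-verify loop described in the abstract can be written down generically: fit a surrogate to the ab initio data gathered so far, locate the saddle in the cheap surrogate, verify the candidate with a single ab initio call, and fold that call back into the training set. The sketch below assumes user-supplied callables for the surrogate model, the saddle-point finder, and the ab initio force evaluation; all names are stand-ins, not an existing API.

```python
def ml_accelerated_saddle_search(initial_guess, ab_initio_force, surrogate,
                                 saddle_search, max_iters=20, tol=1e-3):
    """Illustrative active-learning loop; surrogate, saddle_search and
    ab_initio_force are stand-ins for the user's ML model, saddle finder,
    and electronic-structure code."""
    training = [(initial_guess, ab_initio_force(initial_guess))]
    guess = initial_guess
    for _ in range(max_iters):
        surrogate.fit(training)                      # train ML model on data so far
        candidate = saddle_search(surrogate, guess)  # cheap search in the ML surrogate
        true_force = ab_initio_force(candidate)      # single expensive verification call
        training.append((candidate, true_force))
        if max(abs(f) for f in true_force) < tol:    # forces ~ 0: verified stationary point
            return candidate
        guess = candidate                            # otherwise refine the model and retry
    return guess
```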

  17. Observed differences in upper extremity forces, muscle efforts, postures, velocities and accelerations across computer activities in a field study of office workers.

    PubMed

    Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T

    2012-01-01

    This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.

  18. Controlling under-actuated robot arms using a high speed dynamics process

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan (Inventor); Rodriguez, Guillermo (Inventor)

    1994-01-01

    The invention controls an under-actuated manipulator by first obtaining predetermined active joint accelerations of the active joints and the passive joint friction forces of the passive joints, then computing articulated body quantities for each of the joints from the current positions of the links, and finally computing the active joint forces of the active joints from the articulated body quantities, the active joint accelerations, and the passive joint forces. Ultimately, the invention transmits servo commands corresponding to the active joint forces thus computed to the respective ones of the joint servos. The computation of the active joint forces is accomplished using a recursive dynamics algorithm. In this computation, an inward recursion is first carried out for each link, beginning with the outermost link, in order to compute the residual link force of each link from the active joint acceleration if the corresponding joint is active, or from the known passive joint force if the corresponding joint is passive. Then, an outward recursion is carried out for each link in which the active joint force is computed from the residual link force if the corresponding joint is active, or the passive joint acceleration is computed from the residual link force if the corresponding joint is passive.

  19. A Report on Army Science Planning and Strategy 2016

    DTIC Science & Technology

    2017-06-01

    Army Research Laboratory (ARL) hosted a series of meetings in fall 2016 to develop a strategic vision for Army Science. Meeting topics were vetted...reduce maturation time. • Support internal Army research efforts to enhance Army investments in multiscale modeling to accelerate the rate of...requirement are research needs including cross-modal approaches to enabling real-time human comprehension under constraints of bandwidth, information

  20. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    ERIC Educational Resources Information Center

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…
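    The baseline that such acceleration techniques improve upon is the PageRank power iteration, a fixed-point iteration on the damped, column-stochastic link matrix. A minimal NumPy version is sketched below; the dangling-node handling and damping factor of 0.85 are standard assumptions for a runnable example, not details taken from the thesis.

```python
import numpy as np

def pagerank_power_iteration(adj, damping=0.85, tol=1e-10, max_iters=1000):
    """Baseline PageRank power iteration on an (n, n) adjacency matrix adj,
    where adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    safe_deg = np.where(out_deg > 0, out_deg, 1.0)
    transition = (adj / safe_deg[:, None]).T       # column-stochastic link matrix
    dangling = out_deg == 0
    x = np.full(n, 1.0 / n)
    for _ in range(max_iters):
        # Link-following rank plus dangling mass spread uniformly, plus teleportation.
        x_new = damping * (transition @ x + x[dangling].sum() / n) + (1.0 - damping) / n
        if np.abs(x_new - x).sum() < tol:          # L1 convergence test
            break
        x = x_new
    return x_new
```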

  1. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  2. Unaligned instruction relocation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  3. Unaligned instruction relocation

    DOEpatents

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.

    2018-01-23

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  4. Fragment-Based Drug Design Facilitated by Protein-Templated Click Chemistry: Fragment Linking and Optimization of Inhibitors of the Aspartic Protease Endothiapepsin.

    PubMed

    Mondal, Milon; Unver, M Yagiz; Pal, Asish; Bakker, Matthijs; Berrier, Stephan P; Hirsch, Anna K H

    2016-10-10

    There is an urgent need for the development of efficient methodologies that accelerate drug discovery. We demonstrate that the strategic combination of fragment linking/optimization and protein-templated click chemistry is an efficient and powerful method that accelerates the hit-identification process for the aspartic protease endothiapepsin. The best binder, which inhibits endothiapepsin with an IC50 value of 43 μM, represents the first example of triazole-based inhibitors of endothiapepsin. Our strategy could find application on a whole range of drug targets. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  5. Strategic Performance Management Evaluation for the Navy’s Splice Local Area Networks.

    DTIC Science & Technology

    1985-04-01

    Communications Agency (DCA)/Federal Data Corporation (FDC) literature; an extensive survey of academic and professional book and article literature... An interesting closing note on strategic planning characteristics is that the period during which collapse or disaster develops is of the same order as the... accepted set of standards. In computer performance, such things as paging rates, throughput, input/output channel usage, and turnaround time

  6. How an Organization's Environmental Orientation Impacts Environmental Performance and Its Resultant Financial Performance through Green Computing Hiring Practices: An Empirical Investigation of the Natural Resource-Based View of the Firm

    ERIC Educational Resources Information Center

    Aken, Andrew Joseph

    2010-01-01

    This dissertation uses the logic embodied in Strategic Fit Theory, the Natural Resource- Based View of the Firm (NRBV), strategic human resource management, and other relevant literature streams to empirically demonstrate how the environmental orientation of a firm's strategy impacts their environmental performance and resultant financial…

  7. Laboratory Directed Research and Development Program FY 2008 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    editor, Todd C Hansen

    2009-02-23

    The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. Berkeley Lab's research and the Laboratory Directed Research and Development (LDRD) program support DOE's Strategic Themes that are codified in DOE's 2006 Strategic Plan (DOE/CF-0010), with a primary focus on Scientific Discovery and Innovation. For that strategic theme, the Fiscal Year (FY) 2008 LDRD projects support each one of the three goals through multiple strategies described in the plan. In addition, LDRD efforts support the four goals of Energy Security, the two goals of Environmental Responsibility, and Nuclear Security (unclassified fundamental research that supports stockpile safety and nonproliferation programs). The LDRD program supports Office of Science strategic plans, including the 20-year Scientific Facilities Plan and the Office of Science Strategic Plan. The research also supports the strategic directions periodically under consideration and review by the Office of Science Program Offices, such as LDRD projects germane to new research facility concepts and new fundamental science directions. The Berkeley Lab LDRD program also plays an important role in leveraging DOE capabilities for national needs. The fundamental scientific research and development conducted in the program advances the skills and technologies of importance to our Work For Others (WFO) sponsors. Among many directions, these include a broad range of health-related science and technology of interest to the National Institutes of Health, breast cancer and accelerator research supported by the Department of Defense, detector technologies that should be useful to the Department of Homeland Security, and particle detection that will be valuable to the Environmental Protection Agency. The Berkeley Lab Laboratory Directed Research and Development Program FY2008 report is compiled from annual reports submitted by principal investigators following the close of the fiscal year. This report describes the supported projects and summarizes their accomplishments. It constitutes a part of the LDRD program planning and documentation process that includes an annual planning cycle, project selection, implementation, and review.

  8. Proposal for an Accelerator R&D User Facility at Fermilab's Advanced Superconducting Test Accelerator (ASTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Church, M.; Edwards, H.; Harms, E.

    2013-10-01

    Fermilab is the nation’s particle physics laboratory, supported by the DOE Office of High Energy Physics (OHEP). Fermilab is a world leader in accelerators, with a demonstrated track record, spanning four decades, of excellence in accelerator science and technology. We describe the significant opportunity to complete, in a highly leveraged manner, a unique accelerator research facility that supports the broad strategic goals in accelerator science and technology within the OHEP. While the US accelerator-based HEP program is oriented toward the Intensity Frontier, which requires modern superconducting linear accelerators and advanced high-intensity storage rings, there are no accelerator test facilities that support the accelerator science of the Intensity Frontier. Further, nearly all proposed future accelerators for Discovery Science will rely on superconducting radiofrequency (SRF) acceleration, yet there are no dedicated test facilities to study SRF capabilities for beam acceleration and manipulation in prototypic conditions. Finally, there is a wide range of experiments and research programs beyond particle physics that require the unique beam parameters that will only be available at Fermilab’s Advanced Superconducting Test Accelerator (ASTA). To address these needs we submit this proposal for an Accelerator R&D User Facility at ASTA. The ASTA program is based on the capability provided by an SRF linac (which provides electron beams from 50 MeV to nearly 1 GeV) and a small storage ring (with the ability to store either electrons or protons) to enable a broad range of beam-based experiments to study fundamental limitations to beam intensity and to develop transformative approaches to particle-beam generation, acceleration and manipulation which cannot be done elsewhere. It will also establish a unique resource for R&D towards Energy Frontier facilities and a test-bed for SRF accelerators and high brightness beam applications in support of the OHEP mission of Accelerator Stewardship.

  9. A genome-wide RNAi screen identifies potential drug targets in a C. elegans model of α1-antitrypsin deficiency.

    PubMed

    O'Reilly, Linda P; Long, Olivia S; Cobanoglu, Murat C; Benson, Joshua A; Luke, Cliff J; Miedel, Mark T; Hale, Pamela; Perlmutter, David H; Bahar, Ivet; Silverman, Gary A; Pak, Stephen C

    2014-10-01

    α1-Antitrypsin deficiency (ATD) is a common genetic disorder that can lead to end-stage liver and lung disease. Although liver transplantation remains the only therapy currently available, manipulation of the proteostasis network (PN) by small molecule therapeutics offers great promise. To accelerate the drug-discovery process for this disease, we first developed a semi-automated high-throughput/content-genome-wide RNAi screen to identify PN modifiers affecting the accumulation of the α1-antitrypsin Z mutant (ATZ) in a Caenorhabditis elegans model of ATD. We identified 104 PN modifiers, and these genes were used in a computational strategy to identify human ortholog-ligand pairs. Based on rigorous selection criteria, we identified four FDA-approved drugs directed against four different PN targets that decreased the accumulation of ATZ in C. elegans. We also tested one of the compounds in a mammalian cell line with similar results. This methodology also proved useful in confirming drug targets in vivo, and predicting the success of combination therapy. We propose that small animal models of genetic disorders combined with genome-wide RNAi screening and computational methods can be used to rapidly, economically and strategically prime the preclinical discovery pipeline for rare and neglected diseases with limited therapeutic options. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Relativistic and noise effects on multiplayer Prisoners' dilemma with entangling initial states

    NASA Astrophysics Data System (ADS)

    Goudarzi, H.; Rashidi, S. S.

    2017-11-01

    Three-player Prisoners' dilemma (Alice, Bob and Colin) is studied in the presence of a single collective environment effect as a noise. The environmental effect is coupled with the final states by a particular form of Kraus operators K_0 and K_1 through an amplitude damping channel. We introduce the decoherence parameter 0 ≤ p ≤ 1 into the corresponding noise matrices in order to control the rate of environmental influence on the payoff of each player. We also consider the Unruh effect on the payoff of the player who is located in a noninertial frame. We suppose that two players (Bob and Colin) are in Rindler region I of Minkowski space-time and move with the same uniform acceleration (r_b = r_c) and frequency mode. The game begins with the classical strategies cooperation (C) and defection (D) accessible to each player. Furthermore, the players are allowed to access the quantum strategic space (Q and M). The quantum entanglement is coupled with the initial classical states by the parameter γ ∈ [0, π/2]. Using entangled initial states obtained by applying a unitary operator Ĵ as the entangling gate, the quantum game (a competition between prisoners, treated as a three-qubit system) is started by choosing strategies from the classical or quantum strategic space. An arbitrarily chosen strategy by each player can lead to payoff profiles that constitute a Nash equilibrium or a Pareto optimum. It is shown that in the presence of the noise effect, choosing the quantum strategy Q results in a winning payoff against the classical strategy D and, for example, the strategy profile (Q, D, C) is Pareto optimal. We find that Eisert's unfair miracle move from the quantum strategic space is an effective strategy for accelerated players in the full-decoherence mode (p = 1) of the game.

  11. Achieving the HIV Prevention Impact of Voluntary Medical Male Circumcision: Lessons and Challenges for Managing Programs

    PubMed Central

    Sgaier, Sema K.; Reed, Jason B.; Thomas, Anne; Njeuhmeli, Emmanuel

    2014-01-01

    Voluntary medical male circumcision (VMMC) is capable of reducing the risk of sexual transmission of HIV from females to males by approximately 60%. In 2007, the WHO and the Joint United Nations Programme on HIV/AIDS (UNAIDS) recommended making VMMC part of a comprehensive HIV prevention package in countries with a generalized HIV epidemic and low rates of male circumcision. Modeling studies undertaken in 2009–2011 estimated that circumcising 80% of adult males in 14 priority countries in Eastern and Southern Africa within five years, and sustaining coverage levels thereafter, could avert 3.4 million HIV infections within 15 years and save US$16.5 billion in treatment costs. In response, WHO/UNAIDS launched the Joint Strategic Action Framework for accelerating the scale-up of VMMC for HIV prevention in Southern and Eastern Africa, calling for 80% coverage of adult male circumcision by 2016. While VMMC programs have grown dramatically since inception, they appear unlikely to reach this goal. This review provides an overview of findings from the PLOS Collection “Voluntary Medical Male Circumcision for HIV Prevention: Improving Quality, Efficiency, Cost Effectiveness, and Demand for Services during an Accelerated Scale-up.” The use of devices for VMMC is also explored. We propose emphasizing management solutions to help VMMC programs in the priority countries achieve the desired impact of averting the greatest possible number of HIV infections. Our recommendations include advocating for prioritization and funding of VMMC, increasing strategic targeting to achieve the goal of reducing HIV incidence, focusing on programmatic efficiency, exploring the role of new technologies, rethinking demand creation, strengthening data use for decision-making, improving governments' program management capacity, strategizing for sustainability, and maintaining a flexible scale-up strategy informed by a strong monitoring, learning, and evaluation platform. PMID:24800840

  12. Achieving the HIV prevention impact of voluntary medical male circumcision: lessons and challenges for managing programs.

    PubMed

    Sgaier, Sema K; Reed, Jason B; Thomas, Anne; Njeuhmeli, Emmanuel

    2014-05-01

    Voluntary medical male circumcision (VMMC) is capable of reducing the risk of sexual transmission of HIV from females to males by approximately 60%. In 2007, the WHO and the Joint United Nations Programme on HIV/AIDS (UNAIDS) recommended making VMMC part of a comprehensive HIV prevention package in countries with a generalized HIV epidemic and low rates of male circumcision. Modeling studies undertaken in 2009-2011 estimated that circumcising 80% of adult males in 14 priority countries in Eastern and Southern Africa within five years, and sustaining coverage levels thereafter, could avert 3.4 million HIV infections within 15 years and save US$16.5 billion in treatment costs. In response, WHO/UNAIDS launched the Joint Strategic Action Framework for accelerating the scale-up of VMMC for HIV prevention in Southern and Eastern Africa, calling for 80% coverage of adult male circumcision by 2016. While VMMC programs have grown dramatically since inception, they appear unlikely to reach this goal. This review provides an overview of findings from the PLOS Collection "Voluntary Medical Male Circumcision for HIV Prevention: Improving Quality, Efficiency, Cost Effectiveness, and Demand for Services during an Accelerated Scale-up." The use of devices for VMMC is also explored. We propose emphasizing management solutions to help VMMC programs in the priority countries achieve the desired impact of averting the greatest possible number of HIV infections. Our recommendations include advocating for prioritization and funding of VMMC, increasing strategic targeting to achieve the goal of reducing HIV incidence, focusing on programmatic efficiency, exploring the role of new technologies, rethinking demand creation, strengthening data use for decision-making, improving governments' program management capacity, strategizing for sustainability, and maintaining a flexible scale-up strategy informed by a strong monitoring, learning, and evaluation platform.

  13. Applying Strategic Visualization(Registered Trademark) to Lunar and Planetary Mission Design

    NASA Technical Reports Server (NTRS)

    Frassanito, John R.; Cooke, D. R.

    2002-01-01

    NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, who may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization (trademark) is used extensively both in the mission design process within the technical community, and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.

  14. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  15. Profiles of Motivated Self-Regulation in College Computer Science Courses: Differences in Major versus Required Non-Major Courses

    ERIC Educational Resources Information Center

    Shell, Duane F.; Soh, Leen-Kiat

    2013-01-01

    The goal of the present study was to utilize a profiling approach to understand differences in motivation and strategic self-regulation among post-secondary STEM students in major versus required non-major computer science courses. Participants were 233 students from required introductory computer science courses (194 men; 35 women; 4 unknown) at…

  16. State Strategic Planning for Technology. Issuegram 38.

    ERIC Educational Resources Information Center

    McCune, Shirley

    This brief publication provides general background on issues related to using microcomputers for instruction and suggests ways in which computer technologies can be included in state education improvement plans. Specific computer assisted instruction (CAI) uses mentioned are individual drill and practice and developing higher order skills. Three…

  17. Accelerating artificial intelligence with reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing their computationally intense portions into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, with many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  18. Sustainable Development Strategy for Russian Mineral Resources Extracting Economy

    NASA Astrophysics Data System (ADS)

    Dotsenko, Elena; Ezdina, Natalya; Prilepskaya, Angelina; Pivnyk, Kirill

    2017-11-01

    The immaturity of strategic and conceptual documents in the sphere of sustainable development of the Russian economy has had a negative impact on long-term strategic forecasting of its neo-industrialization. At the present stage, the problems of overcoming mineral and raw-material dependence, reversing the negative structural shift of the Russian economy, accelerating economic growth, and closing the technological gap with developed countries have become strategically urgent. The modern structure of the Russian economy, developed within the framework of the adopted market model, does not generate a sustainable type of development. It has become obvious that amid the entropy of market processes, without neo-industrial changes, without reconstruction of industry on a new convergence-technological basis, and without an increasing share of high-technology production, the instability of the macroeconomic system and the risks to Russia's environmental and economic security are growing. Therefore, a transition is needed from forming a strategy for a single industry to a national strategy that takes into account the social, economic, and environmental challenges facing Russia as a mineral resources extracting country.

  19. Vacuum Brazing of Accelerator Components

    NASA Astrophysics Data System (ADS)

    Singh, Rajvir; Pant, K. K.; Lal, Shankar; Yadav, D. P.; Garg, S. R.; Raghuvanshi, V. K.; Mundra, G.

    2012-11-01

    Commonly used materials for accelerator components are those that are vacuum compatible and thermally conductive. Stainless steel, aluminum and copper are common among them. Stainless steel is a poor heat conductor and is not often used where good thermal conductivity is required. Aluminum, copper and their alloys meet the above requirements and are frequently used for this purpose. Fabricating accelerator components from aluminum and its alloys by welding has become common practice. It is mandatory to use copper and its various grades in RF devices required for accelerators. Beam line and front-end components of the accelerators are fabricated from stainless steel and OFHC copper. Fabrication of copper components by welding is very difficult and in most cases impossible. Fabrication and joining in such cases is possible using a brazing process, especially under vacuum or an inert-gas atmosphere. Several accelerator components have been vacuum brazed for the Indus projects using the vacuum brazing facility available at Raja Ramanna Centre for Advanced Technology (RRCAT), Indore. This paper presents details regarding the development of the above-mentioned high-value, strategic components and assemblies, including the basics required for vacuum brazing, details of the vacuum brazing facility, joint design, fixturing of the jobs, selection of filler alloys, optimization of brazing parameters to obtain high-quality brazed joints, and brief descriptions of vacuum-brazed accelerator components.

  20. A Study on Strategic Planning and Procurement of Medicals in Uganda’s Regional Referral Hospitals

    PubMed Central

    2016-01-01

    This study was an analysis of the effect of strategic planning on procurement of medicals in Uganda’s regional referral hospitals (RRH’s). Medicals were defined as essential medicines, medical devices and medical equipment. The Ministry of Health (MOH) has been carrying out strategic planning for the last 15 years via the Health Sector Strategic Plans. Their assumption was that strategic planning would translate to strategic procurement and, consequently, availability of medicals in the RRH’s. However, despite the existence of these plans, there have been many complaints about expired drugs and shortages in RRH’s. A third variable was therefore included to serve a mediating role. A questionnaire was used to obtain information on perceptions of 206 respondents who were selected using simple random sampling. Eight key informant interviews were held, 2 in each RRH. Four Focus Group Discussions were held, 1 for each RRH, and between 5 and 8 staff took part as discussants for approximately three hours. The findings suggested that strategic planning was affected by funding to approximately 34%, while the relationship between funding and procurement was 35%. The direct relationship between strategic planning and procurement was 18%. However, when the total causal effect was computed, it turned out that strategic planning and the related variable of funding contributed 77% to procurement of medicals under the current hierarchical model where MOH is charged with development of strategic plans for the entire health sector. Since even with this contribution there were complaints, the study proposed a new model called CALF; according to a simulation, if MOH adopted this model, strategic planning would contribute 87% to effectiveness in procurement of medicals. PMID:28299158

  1. A Study on Strategic Planning and Procurement of Medicals in Uganda's Regional Referral Hospitals.

    PubMed

    Masembe, Ishak Kamaradi

    2016-12-31

    This study was an analysis of the effect of strategic planning on procurement of medicals in Uganda's regional referral hospitals (RRH's). Medicals were defined as essential medicines, medical devices and medical equipment. The Ministry of Health (MOH) has been carrying out strategic planning for the last 15 years via the Health Sector Strategic Plans. Their assumption was that strategic planning would translate to strategic procurement and, consequently, availability of medicals in the RRH's. However, despite the existence of these plans, there have been many complaints about expired drugs and shortages in RRH's. A third variable was therefore included to serve a mediating role. A questionnaire was used to obtain information on perceptions of 206 respondents who were selected using simple random sampling. Eight key informant interviews were held, 2 in each RRH. Four Focus Group Discussions were held, 1 for each RRH, and between 5 and 8 staff took part as discussants for approximately three hours. The findings suggested that strategic planning was affected by funding to approximately 34%, while the relationship between funding and procurement was 35%. The direct relationship between strategic planning and procurement was 18%. However, when the total causal effect was computed, it turned out that strategic planning and the related variable of funding contributed 77% to procurement of medicals under the current hierarchical model where MOH is charged with development of strategic plans for the entire health sector. Since even with this contribution there were complaints, the study proposed a new model called CALF; according to a simulation, if MOH adopted this model, strategic planning would contribute 87% to effectiveness in procurement of medicals.

  2. Revolution or Evolution: Combined Arms Warfare in the Twenty-First Century

    DTIC Science & Technology

    1999-06-04

    of the relationship of technological development and its impact on the military. Technology has always been a major factor in the initiation, execution...this technology revolutionized warfare? There has been much argument lately that the U.S. Army is participating in the latest revolution in military ...their impact on strategic concerns: The emergence of technology that has military applications is accelerating, but revolutionary changes in military

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  4. Epistolary and Expository Interaction Patterns in a Computer Conference Transcript.

    ERIC Educational Resources Information Center

    Fahy, Patrick J.

    2002-01-01

    Discusses the relationship of gender and discourse types, including epistolary and expository, in computer-mediated communication such as listservs. Describes a study that used transcript analysis to determine whether gender patterns could be detected in an online graduate course and considers the strategic value of discourse styles in group…

  5. Reflections on a Strategic Vision for Computer Network Operations

    DTIC Science & Technology

    2010-05-25

    either a traditional or an irregular war. It cannot include the disarmament or destruction of enemy forces or the occupation of its geographic territory...Washington, DC: Chairman of the Joint Chiefs of Staff, 15 August 2007), GL-7. Mr. John Mense, Basic Computer Network Operations Planners Course

  6. Two Studies Examining Argumentation in Asynchronous Computer Mediated Communication

    ERIC Educational Resources Information Center

    Joiner, Richard; Jones, Sarah; Doherty, John

    2008-01-01

    Asynchronous computer mediated communication (CMC) would seem to be an ideal medium for supporting development in student argumentation. This paper investigates this assumption through two studies. The first study compared asynchronous CMC with face-to-face discussions. The transactional and strategic level of the argumentation (i.e. measures of…

  7. Maintaining Pedagogical Integrity of a Computer Mediated Course Delivery in Social Foundations

    ERIC Educational Resources Information Center

    Stewart, Shelley; Cobb-Roberts, Deirdre; Shircliffe, Barbara J.

    2013-01-01

    Transforming a face to face course to a computer mediated format in social foundations (interdisciplinary field in education), while maintaining pedagogical integrity, involves strategic collaboration between instructional technologists and content area experts. This type of planned partnership requires open dialogue and a mutual respect for prior…

  8. The Talent Development Middle School. An Elective Replacement Approach to Providing Extra Help in Math--The CATAMA Program (Computer- and Team-Assisted Mathematics Acceleration). Report No. 21.

    ERIC Educational Resources Information Center

    Mac Iver, Douglas J.; Balfanz, Robert; Plank, Stephen B.

    In Talent Development Middle Schools, students needing extra help in mathematics participate in the Computer- and Team-Assisted Mathematics Acceleration (CATAMA) course. CATAMA is an innovative combination of computer-assisted instruction and structured cooperative learning that students receive in addition to their regular math course for about…

  9. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation time matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
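
    The analytical performance model itself is not given in the abstract; a minimal sketch of one plausible form treats wall time as per-GPU compute plus serialized transfer over the 1 Gbps interconnect. The function, throughput, and data sizes below are assumptions for illustration, not the published model:

        def predicted_time(total_work, n_gpus, gpu_throughput, transfer_bytes, link_gbps=1.0):
            """Estimate wall time: parallel compute across GPUs plus serialized data transfer.

            total_work       -- work units (e.g. dose voxels to evaluate)
            gpu_throughput   -- work units per second for a single GPU
            transfer_bytes   -- data moved over the server interconnect
            link_gbps        -- interconnect bandwidth in gigabits per second
            """
            compute = total_work / (n_gpus * gpu_throughput)
            transfer = transfer_bytes * 8 / (link_gbps * 1e9)
            return compute + transfer

        baseline = predicted_time(1e9, 1, 5e7, 2e8)
        for n in (2, 4, 8, 14):
            t = predicted_time(1e9, n, 5e7, 2e8)
            print(f"{n:2d} GPUs: predicted speedup {baseline / t:.2f}x")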

  10. Fermilab | Tritium at Fermilab | Frequently asked questions

    Science.gov Websites


  11. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
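
    A minimal sketch of the kind of calculation described, assuming a generic one-dimensional gust spectrum and an invented first-order roll-rate transfer function rather than the airplane models of the report; the output PSD is the input PSD scaled by the squared magnitude of the frequency response, and the root-mean-square value follows by integration:

        import numpy as np

        # Frequency grid (rad/s) and a generic one-dimensional gust velocity spectrum.
        omega = np.linspace(0.01, 20.0, 2000)
        sigma_g, L, V = 1.5, 300.0, 80.0   # gust rms (m/s), scale length (m), airspeed (m/s) - assumed
        phi_gust = sigma_g ** 2 * (2 * L / (np.pi * V)) / (1 + (L * omega / V) ** 2)

        # Generic first-order roll-rate response to side gusts (illustrative transfer function).
        K, tau = 0.02, 0.8
        H = K * 1j * omega / (1 + 1j * omega * tau)

        # Output PSD is |H|^2 times the input PSD; the rms follows from integrating over frequency.
        phi_roll_rate = np.abs(H) ** 2 * phi_gust
        rms = np.sqrt(np.trapz(phi_roll_rate, omega))
        print(f"rms roll rate: {rms:.4f} rad/s")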

  12. Fast hydrological model calibration based on the heterogeneous parallel computing accelerated shuffled complex evolution method

    NASA Astrophysics Data System (ADS)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke

    2018-01-01

    Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
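
    The costly, parallelizable step in SCE-UA is evaluating the objective function for many candidate parameter sets at once. A minimal sketch of that step using Python multiprocessing as a stand-in for the paper's OpenMP/CUDA implementation; the toy objective and parameter ranges are assumptions:

        import numpy as np
        from multiprocessing import Pool

        def objective(params):
            """Toy calibration objective standing in for a rainfall-runoff model run."""
            k, x = params
            return (k - 0.7) ** 2 + (x - 0.2) ** 2

        def evaluate_population(population, workers=4):
            """Evaluate all candidate parameter sets concurrently (the costly SCE-UA step)."""
            with Pool(workers) as pool:
                return np.array(pool.map(objective, population))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            population = [tuple(rng.uniform(0, 1, 2)) for _ in range(64)]
            scores = evaluate_population(population)
            print("best candidate:", population[int(np.argmin(scores))])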

  13. Jefferson Lab Virtual Tour

    ScienceCinema

    None

    2018-01-16

    Take a virtual tour of the campus of Thomas Jefferson National Accelerator Facility. You can see inside our two accelerators, three experimental areas, accelerator component fabrication and testing areas, high-performance computing areas and laser labs.

  14. Generation of nanosecond neutron pulses in vacuum accelerating tubes

    NASA Astrophysics Data System (ADS)

    Didenko, A. N.; Shikanov, A. E.; Rashchikov, V. I.; Ryzhkov, V. I.; Shatokhin, V. L.

    2014-06-01

    The generation of neutron pulses with a duration of 1-100 ns using small vacuum accelerating tubes is considered. Two physical models of acceleration of short deuteron bunches in pulse neutron generators are described. The dependences of an instantaneous neutron flux in accelerating tubes on the parameters of pulse neutron generators are obtained using computer simulation. The results of experimental investigation of short-pulse neutron generators based on the accelerating tube with a vacuum-arc deuteron source, connected in the circuit with a discharge peaker, and an accelerating tube with a laser deuteron source, connected according to the Arkad'ev-Marx circuit, are given. In the experiments, the neutron yield per pulse reached 10^7 for a pulse duration of 10-100 ns. The resultant experimental data are in satisfactory agreement with the results of computer simulation.

  15. Role of the superior colliculus in choosing mixed-strategy saccades.

    PubMed

    Thevarajah, Dhushan; Mikulić, Areh; Dorris, Michael C

    2009-02-18

    Game theory outlines optimal response strategies during mixed-strategy competitions. The neural processes involved in choosing individual strategic actions, however, remain poorly understood. Here, we tested whether the superior colliculus (SC), a brain region critical for generating sensory-guided saccades, is also involved in choosing saccades under strategic conditions. Monkeys were free to choose either of two saccade targets as they competed against a computer opponent during the mixed-strategy game "matching pennies." The accuracy with which presaccadic SC activity predicted upcoming choice gradually increased in the time leading up to the saccade. Probing the SC with suprathreshold stimulation demonstrated that these evolving signals were functionally involved in preparing strategic saccades. Finally, subthreshold stimulation of the SC increased the likelihood that contralateral saccades were selected. Together, our results suggest that motor regions of the brain play an active role in choosing strategic actions rather than passively executing those prespecified by upstream executive regions.

  16. Stratway: A Modular Approach to Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.

    2011-01-01

    In this paper we introduce Stratway, a modular approach to finding long-term strategic resolutions to conflicts between aircraft. The modular approach provides both advantages and disadvantages. Our primary concern is to investigate the implications for the verification of safety-critical properties of a strategic resolution algorithm. By partitioning the problem into verifiable modules, much stronger verification claims can be established. Since strategic resolution involves searching for solutions over an enormous state space, Stratway, like most similar algorithms, searches these spaces by applying heuristics, which present especially difficult verification challenges. An advantage of a modular approach is that it makes a clear distinction between the resolution function and the trajectory generation function. This allows the resolution computation to be independent of any particular vehicle. The Stratway algorithm was developed in both Java and C++ and is available under an open-source license. Additionally, there is a visualization application that is helpful when analyzing and quickly creating conflict scenarios.

  17. Rethinking Innovation: Disruptive Technology and Strategic Response

    DTIC Science & Technology

    2005-04-01

    Computer News, a magazine on public sector information technology. Pierce teaches classes at the Naval Postgraduate School and recently authored Warfighting and Disruptive Technologies: Disguising Innovation.

  18. Research for the Fluid Field of the Centrifugal Compressor Impeller in Accelerating Startup

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhu; Chen, Gang; Zhu, Changyun; Qin, Guoliang

    2013-03-01

    In order to study the flow field in the impeller during the accelerating start-up of a centrifugal compressor, the 3-D and 1-D transient accelerated-flow governing equations along a streamline in the impeller are derived in detail, an assumption on the pressure gradient distribution is presented, and a solution method for the 1-D transient accelerating flow field is given based on that assumption. The solution method is implemented in a program and computational results are obtained. Comparison shows that the computed results agree with the test results, demonstrating the feasibility and effectiveness of the method presented in this paper for solving the accelerating start-up problem of a centrifugal compressor.

  19. MALVAC 2012 scientific forum: accelerating development of second-generation malaria vaccines

    PubMed Central

    2012-01-01

    The World Health Organization (WHO) convened a malaria vaccines committee (MALVAC) scientific forum from 20 to 21 February 2012 in Geneva, Switzerland, to review the global malaria vaccine portfolio, to gain consensus on approaches to accelerate second-generation malaria vaccine development, and to discuss the need to update the vision and strategic goal of the Malaria Vaccine Technology Roadmap. This article summarizes the forum, which included reviews of leading Plasmodium falciparum vaccine candidates for pre-erythrocytic vaccines, blood-stage vaccines, and transmission-blocking vaccines. Other major topics included vaccine candidates against Plasmodium vivax, clinical trial site capacity development in Africa, trial design considerations for a second-generation malaria vaccine, adjuvant selection, and regulatory oversight functions including vaccine licensure. PMID:23140365

  20. Accelerator Facilities for Radiation Research

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    1999-01-01

    HSRP Goals in Accelerator Use and Development are: 1. Need for a ground-based heavy ion and proton facility to understand space radiation effects, discussed most recently by the NAS/NRC Report (1996). 2. Strategic Program Goals in facility usage and development: (1) operation of AGS for approximately 600 beam hours/year; (2) operation of Loma Linda University (LLU) proton facility for approximately 400 beam hours/year; (3) construction of BAF facility; and (4) collaborative research at HIMAC in Japan and with other existing or potential international facilities. 3. MOA with LLU has been established to provide proton beams with energies of 40-250 MeV, important for trapped protons and solar proton events. 4. Limited number of beam hours available at Brookhaven National Laboratory's (BNL) Alternating Gradient Synchrotron (AGS).

  1. Facilitating Analysis of Multiple Partial Data Streams

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Liebersbach, Robert R.

    2008-01-01

    Robotic Operations Automation: Mechanisms, Imaging, Navigation report Generation (ROAMING) is a set of computer programs that facilitates and accelerates both tactical and strategic analysis of time-sampled data, especially the disparate and often incomplete streams of Mars Exploration Rover (MER) telemetry data described in the immediately preceding article. As used here, tactical refers to the activities over a relatively short time (one Martian day in the original MER application) and strategic refers to a longer time (the entire multi-year MER missions in the original application). Prior to installation, ROAMING must be configured with the types of data of interest, and parsers must be modified to understand the format of the input data (many example parsers are provided, including for general CSV files). Thereafter, new data from multiple disparate sources are automatically resampled into a single common annotated spreadsheet stored in a readable space-separated format, and these data can be processed or plotted at any time scale. Such processing or plotting makes it possible to study not only the details of a particular activity spanning only a few seconds, but also longer-term trends. ROAMING makes it possible to generate mission-wide plots of multiple engineering quantities [e.g., vehicle tilt as in Figure 1(a), motor current, numbers of images] that heretofore could be found only in thousands of separate files. ROAMING also supports automatic annotation of both images and graphs. In the MER application, labels given to terrain features by rover scientists and engineers are automatically plotted in all received images based on their associated camera models (see Figure 2), times measured in seconds are mapped to Mars local time, and command names or arbitrary time-labeled events can be used to label engineering plots, as in Figure 1(b).
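
    A minimal sketch of the resampling step described above, interpolating two hypothetical telemetry streams onto a common one-second grid and writing a space-separated table; the stream names, sample times, and units are invented for illustration:

        import numpy as np

        # Two hypothetical telemetry streams sampled at different, irregular times.
        tilt_t = np.array([0.0, 3.1, 7.4, 12.0]);  tilt = np.array([2.0, 2.4, 5.1, 4.8])
        amps_t = np.array([1.0, 2.0, 6.5, 11.0]);  amps = np.array([0.3, 0.9, 1.4, 0.6])

        # Common 1-second grid covering both streams; linear interpolation fills the gaps.
        grid = np.arange(0.0, 12.0, 1.0)
        table = np.column_stack([grid,
                                 np.interp(grid, tilt_t, tilt),
                                 np.interp(grid, amps_t, amps)])

        # Space-separated output, one row per resampled instant (tilt in deg, current in A).
        np.savetxt("roaming_like_table.txt", table,
                   header="time tilt_deg motor_current_a", comments="")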

  2. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, and each of them gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration. PMID:28611851

  3. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, and each of them gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration.

  4. War gaming for strategic and tactical nuclear warfare. January 1970-January 1988 (citations from the NTIS data base). Report for January 1970-January 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-01-01

    This bibliography contains citations concerning non-quick war gaming for strategic and tactical nuclear warfare. Analyses and comparative evaluations, based upon computerized simulations, are considered as are manuals and specification for the various computer programs employed. Stage 64 and Satan II and III are covered prominently. (This updated bibliography contains 356 citations, 36 of which are new entries to the previous edition.)

  5. Computation of Material Demand in the Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) Process

    DTIC Science & Technology

    2015-08-01

    Congress concerning requirements for the National Defense Stockpile (NDS) of strategic and critical non-fuel materials. RAMF-SM, which was...critical non-fuel materials. The NDS was established in the World War II era and has been managed by the Department of Defense (DOD) since 1988. By...Department of the Interior. An alternative algorithm is used for materials with intensive defense demands.

  6. Design and simulation of a descent controller for strategic four-dimensional aircraft navigation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lax, F. M.

    1975-01-01

    A time-controlled navigation system applicable to the descent phase of flight for airline transport aircraft was developed and simulated. The design incorporates the linear discrete-time sampled-data version of the linearized continuous-time system describing the aircraft's aerodynamics. Using optimal linear quadratic control techniques, an optimal deterministic control regulator which is implementable on an airborne computer is designed. The navigation controller assists the pilot in complying with assigned times of arrival along a four-dimensional flight path in the presence of wind disturbances. The strategic air traffic control concept is also described, followed by the design of a strategic control descent path. A strategy for determining possible times of arrival at specified waypoints along the descent path and for generating the corresponding route-time profiles that are within the performance capabilities of the aircraft is presented. Using a mathematical model of the Boeing 707-320B aircraft along with a Boeing 707 cockpit simulator interfaced with an Adage AGT-30 digital computer, a real-time simulation of the complete aircraft aerodynamics was achieved. The strategic four-dimensional navigation controller for longitudinal dynamics was tested on the nonlinear aircraft model in the presence of 15, 30, and 45 knot head-winds. The results indicate that the controller preserved the desired accuracy and precision of a time-controlled aircraft navigation system.
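
    A minimal sketch of the optimal discrete-time regulator design mentioned above, computing a steady-state linear quadratic gain by iterating the Riccati recursion; the two-state model and weights are invented for illustration, not the Boeing 707-320B linearization:

        import numpy as np

        def dlqr_gain(A, B, Q, R, iters=500):
            """Steady-state discrete LQR gain K via backward Riccati iteration."""
            P = Q.copy()
            for _ in range(iters):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        # Illustrative two-state descent dynamics (altitude error, sink-rate error).
        A = np.array([[1.0, 1.0],
                      [0.0, 0.95]])
        B = np.array([[0.0],
                      [0.1]])
        Q = np.diag([1.0, 0.5])
        R = np.array([[0.2]])

        K = dlqr_gain(A, B, Q, R)
        print("state-feedback gain:", K)   # control law: u = -K @ x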

  7. Identifying and Developing Leadership Competencies in Health Research Organizations: A Pilot Study

    PubMed Central

    Davidson, Pamela L.; Azziz, Ricardo; Morrison, James; Rocha, Janet; Braun, Jonathan

    2018-01-01

    We investigated leadership competencies for developing senior and emerging leaders and the perceived effectiveness of leadership development programs in Health Research Organizations (HROs). A pilot study was conducted to interview HRO executives in Southern California. Respondents represented different organizational contexts to ensure a diverse overview of strategic issues, competencies, and development needs. We analyzed qualitative and quantitative data using an innovative framework for analyzing HRO leadership development. The National Center for Healthcare Leadership ‘Health Leadership Competency Model’ was used as the foundation of our competency research. Top strategic issues included economic downturn and external funding, the influence of governmental policies and regulations, operating in global markets, and forming strategic alliances. High priority NCHL leadership competencies required to successfully lead an HRO include talent development, collaboration, strategic orientation, and team leadership. Senior executives need financial skills and scientific achievement; emerging leaders need technical/scientific competence, information seeking, and a strong work ethic. About half of the respondents reported having no leadership development program (LDP). Almost all reported their organization encourages mentoring, but less than one-third reported an active formalized mentoring program. We conclude that uncertainties and challenges related to healthcare reform and the continued budget deficits will require HRO restructuring to contain costs, remove barriers to innovation, and show value-add in accelerating discovery to improve clinical care, patient outcomes, and community health. Successful leaders will need to become more strategic, entrepreneurial, and resourceful in developing research alliances, executing research operations, and continually improving performance at all levels of the HRO. PMID:29749995

  8. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
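
    Conceptually, the comparison galario accelerates looks like the following sketch: Fourier-transform the model image, sample the transform at the observed (u, v) points, and form a chi-square against the data. This is generic NumPy code under assumed units and grid sizes, not the galario API or its GPU implementation:

        import numpy as np

        def synthetic_visibilities(image, dxy, u, v):
            """FFT the model image and crudely sample the nearest grid point for each (u, v)."""
            n = image.shape[0]
            vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image))) * dxy ** 2
            freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dxy))   # spatial frequencies (wavelengths)
            iu = np.searchsorted(freqs, u)
            iv = np.searchsorted(freqs, v)
            return vis_grid[iv, iu]

        def chi2(model_vis, obs_vis, weights):
            return np.sum(weights * np.abs(model_vis - obs_vis) ** 2)

        # Tiny illustrative example: a Gaussian source observed at three (u, v) points.
        n, dxy = 256, 1e-7                     # pixels and pixel size in radians (assumed)
        x = (np.arange(n) - n / 2) * dxy
        image = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * (5 * dxy) ** 2))
        u = np.array([1e5, 3e5, 5e5]); v = np.array([0.0, 1e5, -2e5])
        vis = synthetic_visibilities(image, dxy, u, v)
        print(chi2(vis, vis * 0.9, np.ones(3)))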

  9. Computer-Assisted Learning in Elementary Reading: A Randomized Control Trial

    ERIC Educational Resources Information Center

    Shannon, Lisa Cassidy; Styers, Mary Koenig; Wilkerson, Stephanie Baird; Peery, Elizabeth

    2015-01-01

    This study evaluated the efficacy of Accelerated Reader, a computer-based learning program, at improving student reading. Accelerated Reader is a progress-monitoring, assessment, and practice tool that supports classroom instruction and guides independent reading. Researchers used a randomized controlled trial to evaluate the program with 344…

  10. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
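
    A minimal serial sketch of the operator splitting used above: an independent reaction step per cell followed by a diffusion step coupling neighbours along a 1-D fiber. A simplified FitzHugh-Nagumo-style cell stands in for the actual SANC and atrial models, and all constants are illustrative:

        import numpy as np

        N_SAN, N_ATRIA = 500, 30           # cells along a 1-D fiber (as in the paper's setup)
        n = N_SAN + N_ATRIA
        v = np.full(n, -0.5)               # dimensionless membrane variable
        w = np.zeros(n)                    # recovery variable
        D, dt, dx = 0.1, 0.01, 1.0

        def reaction_step(v, w, dt):
            """Per-cell kinetics: the part evaluated independently for every cell (GPU-friendly)."""
            dv = v - v ** 3 / 3 - w + 0.5
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            return v + dt * dv, w + dt * dw

        def diffusion_step(v, dt):
            """Explicit 1-D diffusion coupling neighbouring cells (no-flux boundaries)."""
            lap = np.zeros_like(v)
            lap[1:-1] = v[:-2] - 2 * v[1:-1] + v[2:]
            lap[0] = v[1] - v[0]
            lap[-1] = v[-2] - v[-1]
            return v + dt * D * lap / dx ** 2

        for step in range(10000):
            v, w = reaction_step(v, w, dt)
            v = diffusion_step(v, dt)
        print("membrane variable at the SAN/atrium border:", v[N_SAN - 1])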

  11. Strategic R&D transactions in personalized drug development.

    PubMed

    Makino, Tomohiro; Lim, Yeongjoo; Kodama, Kota

    2018-03-21

    Although external collaboration capability influences the development of personalized medicine, key transactions in the pharmaceutical industry have not been addressed. To explore specific trends in interorganizational transactions and key players, we longitudinally surveyed strategic transactions, comparing them with other advanced medical developments, such as antibody therapy, as controls. We found that the financing deals of start-ups have surged over the past decade, accelerating intellectual property (IP) creation. Our correlation and regression analyses identified determinants of financing deals among alliance deals, acquisition deals, patents, research and development (R&D) licenses, market licenses, and scientific papers. They showed that patents positively correlated with transactions, and that the number of R&D licenses significantly predicted financing deals. This indicates, for the first time, that start-ups and investors lead progress in personalized medicine. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    NASA Astrophysics Data System (ADS)

    Heller, Johann; Flisgen, Thomas; van Rienen, Ursula

    The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.

  13. Strategic Air Traffic Planning Using Eulerian Route Based Modeling and Optimization

    NASA Astrophysics Data System (ADS)

    Bombelli, Alessandro

    Due to a soaring air travel growth in the last decades, air traffic management has become increasingly challenging. As a consequence, planning tools are being devised to help human decision-makers achieve a better management of air traffic. Planning tools are divided into two categories, strategic and tactical. Strategic planning generally addresses a larger planning domain and is performed days to hours in advance. Tactical planning is more localized and is performed hours to minutes in advance. An aggregate route model for strategic air traffic flow management is presented. It is an Eulerian model, describing the flow between cells of unidirectional point-to-point routes. Aggregate routes are created from flight trajectory data based on similarity measures. Spatial similarity is determined using the Frechet distance. The aggregate routes approximate actual well-traveled traffic patterns. By specifying the model resolution, an appropriate balance between model accuracy and model dimension can be achieved. For a particular planning horizon, during which weather is expected to restrict the flow, a procedure for designing airborne reroutes and augmenting the traffic flow model is developed. The dynamics of the traffic flow on the resulting network take the form of a discrete-time, linear time-invariant system. The traffic flow controls are ground holding, pre-departure rerouting and airborne rerouting. Strategic planning--determining how the controls should be used to modify the future traffic flow when local capacity violations are anticipated--is posed as an integer programming problem of minimizing a weighted sum of flight delays subject to control and capacity constraints. Several tests indicate the effectiveness of the modeling and strategic planning approach. In the final, most challenging, test, strategic planning is demonstrated for the six western-most Centers of the 22-Center national airspace. The planning time horizon is four hours long, and there is weather predicted that causes significant delays to the scheduled flights. Airborne reroute options are computed and added to the route model, and it is shown that the predicted delays can be significantly reduced. The test results also indicate the computational feasibility of the approach for a planning problem of this size.
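
    The flow dynamics described above reduce to a linear, discrete-time update on cell occupancies, with ground holding acting as a control on the inflow. The sketch below shows that structure for a single aggregate route in Python; the cell count, schedule, entry capacity, and greedy holding rule are illustrative assumptions and stand in for the paper's integer-programming formulation.

        import numpy as np

        n_cells, n_steps = 6, 12
        x = np.zeros(n_cells)                                   # aircraft per route cell
        scheduled = np.array([3, 2, 4, 1, 2, 3, 0, 0, 0, 0, 0, 0], dtype=float)
        entry_capacity = 2.0                                    # admissions per step (assumed)
        held, total_delay = 0.0, 0.0

        for k in range(n_steps):
            demand = scheduled[k] + held                        # new departures plus backlog
            admitted = min(demand, entry_capacity)              # ground-holding control
            held = demand - admitted                            # flights delayed this step
            total_delay += held
            x = np.concatenate(([admitted], x[:-1]))            # flow advances one cell per step

        print(f"total ground delay (flight-steps): {total_delay:.0f}")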

  14. Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo

    DOE PAGES

    Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...

    2017-08-26

    Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters is also introduced.
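
    The usefulness of a signed distance field for particle tracking comes from a simple guarantee: the magnitude of the field at a point bounds the distance to the nearest surface, so a particle can advance that far without any boundary-crossing test. The sketch below illustrates the idea with an analytic sphere standing in for the CAD geometry; the real toolkit evaluates a gridded field built from the tessellated model.

        import numpy as np

        def sdf_sphere(p, center=np.array([0.0, 0.0, 0.0]), radius=5.0):
            # Signed distance to a sphere: negative inside, positive outside.
            return np.linalg.norm(p - center) - radius

        def march_to_surface(origin, direction, max_steps=100, eps=1e-6):
            p = np.asarray(origin, dtype=float)
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            travelled = 0.0
            for _ in range(max_steps):
                dist = abs(sdf_sphere(p))
                if dist < eps:                 # reached the boundary
                    return travelled, p
                p = p + dist * d               # safe step: no surface closer than dist
                travelled += dist
            return travelled, p

        length, hit = march_to_surface(origin=[10.0, 0.0, 0.0], direction=[-1.0, 0.0, 0.0])
        print(f"distance to boundary ~ {length:.4f} at point {hit}")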

  15. Towards SDS (Strategic Defense System) Testing and Evaluation: A collection of Relevant Topics

    DTIC Science & Technology

    1989-07-01

    the proof of the next. 89 The Piton project is the first instance of stacking two verified components. In 1985 Warren...Accelerated? In the long term, a vast amount of work needs to be done. Below are some miscellaneous, fairly near-term projects which would seem to provide...and predictions for the current project. It provides a quantitative analysis of the environment and a model of the

  16. A Neural Mechanism of Strategic Social Choice under Sanction-Induced Norm Compliance

    PubMed

    Makwana, Aidan; Grön, Georg; Fehr, Ernst; Hare, Todd A

    2015-01-01

    In recent years, much has been learned about the representation of subjective value in simple, nonstrategic choices. However, a large fraction of our daily decisions are embedded in social interactions in which value guided decisions require balancing benefits for self against consequences imposed by others in response to our choices. Yet, despite their ubiquity, much less is known about how value computation takes place in strategic social contexts that include the possibility of retribution for norm violations. Here, we used functional magnetic resonance imaging (fMRI) to show that when human subjects face such a context connectivity increases between the temporoparietal junction (TPJ), implicated in the representation of other peoples' thoughts and intentions, and regions of ventromedial prefrontal cortex (vmPFC) that are associated with value computation. In contrast, we find no increase in connectivity between these regions in social nonstrategic cases where decision-makers are immune from retributive monetary punishments from a human partner. Moreover, there was also no increase in TPJ-vmPFC connectivity when the potential punishment was performed by a computer programmed to punish fairness norm violations in the same manner as a human would. Thus, TPJ-vmPFC connectivity is not simply a function of the social or norm enforcing nature of the decision, but rather occurs specifically in situations where subjects make decisions in a social context and strategically consider putative consequences imposed by others.

  17. Software package for modeling spin-orbit motion in storage rings

    NASA Astrophysics Data System (ADS)

    Zyuzin, D. V.

    2015-12-01

    A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10^6-10^9 particles in a beam during 10^9 turns in an accelerator (about 10^12-10^15 integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin-orbit dynamics.

  18. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR) are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup by adding reasonably small processing elements in Field Programmable Gate Array (FPGA) as opposed to using a software implementation running on a typical general purpose processor.
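
    The homomorphic filtering step mentioned above exploits the fact that SAR speckle is approximately multiplicative: taking a logarithm turns it into additive noise that an ordinary linear low-pass filter can suppress, and exponentiation maps the result back. The sketch below shows the signal-processing idea in Python on a synthetic 1-D profile; on the FPGA it is the logarithm kernel itself that was implemented in hardware, and the filter kernel and noise model here are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        signal = 1.0 + 0.5 * np.sin(np.linspace(0.0, 6.0, 256))       # "true" reflectivity
        speckle = rng.gamma(shape=4.0, scale=0.25, size=signal.size)   # multiplicative noise
        observed = signal * speckle

        log_obs = np.log(observed)                      # multiplicative -> additive
        kernel = np.ones(9) / 9.0                       # simple low-pass filter (assumed)
        log_smoothed = np.convolve(log_obs, kernel, mode="same")
        estimate = np.exp(log_smoothed)                 # back to the intensity domain

        print("residual RMS:", np.sqrt(np.mean((estimate - signal) ** 2)))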

  19. Final Report on Institutional Computing Project s15_hilaserion, “Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators as an Enabling Capability”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    This proposal sought of order 1M core-hours of Institutional Computing time intended to enable computing by a new LANL Postdoc (David Stark) working under LDRD ER project 20160472ER (PI: Lin Yin) on laser-ion acceleration. The project was “off-cycle,” initiating in June of 2016 with a postdoc hire.

  20. Institute for scientific computing research;fiscal year 1999 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D

    2000-03-28

    Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well worth the continued effort. A change of administration for the ISCR occurred during FY 1999. Acting Director John Fitzgerald retired from LLNL in August after 35 years of service, including the last two at the helm of the ISCR. David Keyes, who has been a regular visitor in conjunction with ASCI scalable algorithms research since October 1997, overlapped with John for three months and serves half-time as the new Acting Director.

  1. Modern Computational Techniques for the HMMER Sequence Analysis

    PubMed Central

    2013-01-01

    This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize by using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944

  2. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    PubMed

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
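
    The computation such an accelerator pipelines is dominated by the dense visible-hidden matrix products of the RBM training update. The sketch below shows one simplified CD-1 (contrastive divergence) step in NumPy, omitting bias terms; the layer sizes, batch size, and learning rate are illustrative assumptions rather than the paper's configuration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, batch = 64, 32, 128
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        v0 = (rng.random((batch, n_visible)) < 0.5).astype(float)   # toy binary data

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Positive phase: hidden probabilities given the data.
        h0 = sigmoid(v0 @ W)
        # One Gibbs step: sample hiddens, reconstruct visibles, recompute hiddens.
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ W.T)
        h1 = sigmoid(v1 @ W)

        # Weight update from the difference of data and reconstruction statistics.
        lr = 0.1
        W += lr * (v0.T @ h0 - v1.T @ h1) / batch
        print("mean absolute weight:", np.mean(np.abs(W)))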

  3. Method for computationally efficient design of dielectric laser accelerator structures

    DOE PAGES

    Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...

    2017-06-22

    Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and ‘adjoint’. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
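
    The efficiency claim above, one forward plus one adjoint simulation for the full gradient, can be illustrated on a small linear system: for an objective J = c^T x with A(p) x = b, the adjoint solve A^T lambda = c gives dJ/dp_i = -lambda^T (dA/dp_i) x for every parameter at once. The sketch below checks this against finite differences; the matrices are random stand-ins, not an electromagnetic solver.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5
        A0 = np.eye(n) * 4.0 + 0.1 * rng.standard_normal((n, n))
        B = [np.diag(np.eye(n)[i]) for i in range(n)]     # dA/dp_i (assumed diagonal "knobs")
        b = rng.standard_normal(n)
        c = rng.standard_normal(n)
        p = np.zeros(n)

        def A_of(p):
            return A0 + sum(pi * Bi for pi, Bi in zip(p, B))

        x = np.linalg.solve(A_of(p), b)                   # forward solve
        lam = np.linalg.solve(A_of(p).T, c)               # adjoint solve
        grad_adjoint = np.array([-lam @ (Bi @ x) for Bi in B])

        # Finite-difference check: one extra solve per parameter.
        eps = 1e-6
        grad_fd = np.empty(n)
        for i in range(n):
            dp = p.copy()
            dp[i] += eps
            grad_fd[i] = (c @ np.linalg.solve(A_of(dp), b) - c @ x) / eps

        print("max difference vs finite differences:", np.max(np.abs(grad_adjoint - grad_fd)))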

  4. Improving Royal Australian Air Force Strategic Airlift Planning by Application of a Computer Based Management Information System

    DTIC Science & Technology

    1991-12-01

    AUSTRALIAN AIR FORCE STRATEGIC AIRLIFT PLANNING BY APPLICATION OF A COMPUTER BASED MANAGEMENT INFORMATION SYSTEM THESIS Presented to the Faculty of the...Master of Science in Information Management Neil A. Cooper, BBus Squadron Leader, RAAF December 1991 Approved for public release; distribution unlimited...grateful to the time and honest views given to me by the ADANS manager, Lieutenant Colonel Charlie Davis. For my Canadian research, I relied on the

  5. Muscle contributions to the acceleration of the whole body centre of mass during recovery from forward loss of balance by stepping in young and older adults.

    PubMed

    Graham, David F; Carty, Christopher P; Lloyd, David G; Barrett, Rod S

    2017-01-01

    The purpose of this study was to determine the muscular contributions to the acceleration of the whole body centre of mass (COM) of older compared to younger adults that were able to recover from forward loss of balance with a single step. Forward loss of balance was achieved by releasing participants (14 older adults and 6 younger adults) from a static whole-body forward lean angle of approximately 18 degrees. 10 older adults and 6 younger adults were able to recover with a single step and included in subsequent analysis. A scalable anatomical model consisting of 36 degrees-of-freedom was used to compute kinematics and joint moments from motion capture and force plate data. Forces for 92 muscle actuators were computed using Static Optimisation and Induced Acceleration Analysis was used to compute individual muscle contributions to the three-dimensional acceleration of the whole body COM. There were no significant differences between older and younger adults in step length, step time, 3D COM accelerations or muscle contributions to 3D COM accelerations. The stance and stepping leg Gastrocnemius and Soleus muscles were primarily responsible for the vertical acceleration experienced by the COM. The Gastrocnemius and Soleus from the stance side leg together with bilateral Hamstrings accelerated the COM forwards throughout balance recovery while the Vasti and Soleus of the stepping side leg provided the majority of braking accelerations following foot contact. The Hip Abductor muscles provided the greatest contribution to medial-lateral accelerations of the COM. Deficits in the neuromuscular control of the Gastrocnemius, Soleus, Vasti and Hip Abductors in particular could adversely influence balance recovery and may be important targets in interventions to improve balance recovery performance.

  6. Muscle contributions to the acceleration of the whole body centre of mass during recovery from forward loss of balance by stepping in young and older adults

    PubMed Central

    Graham, David F.; Carty, Christopher P.; Lloyd, David G.

    2017-01-01

    The purpose of this study was to determine the muscular contributions to the acceleration of the whole body centre of mass (COM) of older compared to younger adults that were able to recover from forward loss of balance with a single step. Forward loss of balance was achieved by releasing participants (14 older adults and 6 younger adults) from a static whole-body forward lean angle of approximately 18 degrees. 10 older adults and 6 younger adults were able to recover with a single step and included in subsequent analysis. A scalable anatomical model consisting of 36 degrees-of-freedom was used to compute kinematics and joint moments from motion capture and force plate data. Forces for 92 muscle actuators were computed using Static Optimisation and Induced Acceleration Analysis was used to compute individual muscle contributions to the three-dimensional acceleration of the whole body COM. There were no significant differences between older and younger adults in step length, step time, 3D COM accelerations or muscle contributions to 3D COM accelerations. The stance and stepping leg Gastrocnemius and Soleus muscles were primarily responsible for the vertical acceleration experienced by the COM. The Gastrocnemius and Soleus from the stance side leg together with bilateral Hamstrings accelerated the COM forwards throughout balance recovery while the Vasti and Soleus of the stepping side leg provided the majority of braking accelerations following foot contact. The Hip Abductor muscles provided the greatest contribution to medial-lateral accelerations of the COM. Deficits in the neuromuscular control of the Gastrocnemius, Soleus, Vasti and Hip Abductors in particular could adversely influence balance recovery and may be important targets in interventions to improve balance recovery performance. PMID:29069097

  7. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
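
    A minimal sketch of the kind of acceleration these methods build on is the standard morphological acceleration factor: the bed change computed over one hydrodynamic time step is multiplied by a factor so that fewer flow steps cover the same morphological time. The snippet below shows that bookkeeping only; it is an assumption-level illustration, not the paper's Time-scale compression method, and all rates and step sizes are invented.

        def accelerated_bed_update(bed, erosion_rate, dt_flow, morfac):
            # Advance the bed level by morfac times the change of one flow step.
            return bed + morfac * erosion_rate * dt_flow

        bed = 0.0                      # bed level [m]
        dt_flow = 600.0                # hydrodynamic step [s]
        morfac = 10.0                  # acceleration factor: 1 flow step counts 10x morphologically
        for _ in range(144):           # one day of flow steps ...
            bed = accelerated_bed_update(bed, erosion_rate=-1.0e-7, dt_flow=dt_flow, morfac=morfac)
        print(f"bed change representing ~10 days of morphology: {bed:.4f} m")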

  8. Numerical Nudging: Using an Accelerating Score to Enhance Performance.

    PubMed

    Shen, Luxi; Hsee, Christopher K

    2017-08-01

    People often encounter inherently meaningless numbers, such as scores in health apps or video games, that increase as they take actions. This research explored how the pattern of change in such numbers influences performance. We found that the key factor is acceleration-namely, whether the number increases at an increasing velocity. Six experiments in both the lab and the field showed that people performed better on an ongoing task if they were presented with a number that increased at an increasing velocity than if they were not presented with such a number or if they were presented with a number that increased at a decreasing or constant velocity. This acceleration effect occurred regardless of the absolute magnitude or the absolute velocity of the number, and even when the number was not tied to any specific rewards. This research shows the potential of numerical nudging-using inherently meaningless numbers to strategically alter behaviors-and is especially relevant in the present age of digital devices.

  9. Unsteady Aerodynamic Force Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2016-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are assumed as measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, computed aerodynamic forces based on the lifting surface theory in subsonic speeds are in good agreement with the target aerodynamic forces generated using CFL3D code with the Euler equation. Excellent aeroelastic responses are obtained even with unsteady strain data under the signal to noise ratio of -9.8dB. The deflections, velocities, and accelerations at each sensor location are independent of structural and aerodynamic models. Therefore, the distributed strain data together with the current proposed approaches can be used as distributed deflection, velocity, and acceleration sensors. This research demonstrates the feasibility of obtaining induced drag and lift forces through the use of distributed sensor technology with measured strain data. An active induced drag control system thus can be designed using the two computed aerodynamic forces, induced drag and lift, to improve the fuel efficiency of an aircraft. Interpolation elements between structural finite element grids and the CFD grids and centroids are successfully incorporated with the unsteady aeroelastic computation scheme. The most critical technology for the success of the proposed approach is the robust on-line parameter estimator, since the least-squares curve fitting method depends heavily on aeroelastic system frequencies and damping factors.
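
    Two ingredients of the pipeline above can be sketched compactly: a least-squares solve that maps measured strains to modal displacement coordinates through a strain-mode matrix, and numerical differentiation of those coordinates to obtain velocities and accelerations. The strain-mode matrix, signal frequencies, and noise level below are illustrative assumptions, and the sketch replaces the paper's on-line parameter estimator with simple finite differences.

        import numpy as np

        rng = np.random.default_rng(2)
        n_sensors, n_modes, n_steps, dt = 12, 3, 200, 0.01
        Phi_strain = rng.standard_normal((n_sensors, n_modes))   # strain per unit modal coordinate

        t = np.arange(n_steps) * dt
        q_true = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1.0, 2.5, 4.0)])
        strain = q_true @ Phi_strain.T + 0.01 * rng.standard_normal((n_steps, n_sensors))

        # Least-squares estimate of modal coordinates at every time step.
        q_hat, *_ = np.linalg.lstsq(Phi_strain, strain.T, rcond=None)
        q_hat = q_hat.T

        q_dot = np.gradient(q_hat, dt, axis=0)       # modal velocities
        q_ddot = np.gradient(q_dot, dt, axis=0)      # modal accelerations
        print("first-mode acceleration RMS:", np.std(q_ddot[:, 0]))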

  10. Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Färber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko

    2017-07-01

    The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100 from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and will select the events on an event by event basis. This will reduce the bandwidth down to a manageable size to write the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are considered and have to be investigated for the different parts of the system. For the usage in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and, therefore, tested. FPGA compute accelerators are used more and more in standard servers such as for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are built, in general, for high-performance computing, are also very interesting for the high-energy physics community. First, the performance results of smaller test cases performed at the beginning are presented. Afterward, a part of the existing LHCb RICH particle identification is ported to the experimental FPGA accelerated platform and tested. We have compared the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.

  11. Optimizations of Human Restraint Systems for Short-Period Acceleration

    NASA Technical Reports Server (NTRS)

    Payne, P. R.

    1963-01-01

    A restraint system's main function is to restrain its occupant when his vehicle is subjected to acceleration. If the restraint system is rigid and well-fitting (to eliminate slack) then it will transmit the vehicle acceleration to its occupant without modifying it in any way. Few present-day restraint systems are stiff enough to give this one-to-one transmission characteristic, and depending upon their dynamic characteristics and the nature of the vehicle's acceleration-time history, they will either magnify or attenuate the acceleration. Obviously an optimum restraint system will give maximum attenuation of an input acceleration. In the general case of an arbitrary acceleration input, a computer must be used to determine the optimum dynamic characteristics for the restraint system. Analytical solutions can be obtained for certain simple cases, however, and these cases are considered in this paper, after the concept of dynamic models of the human body is introduced. The paper concludes with a description of an analog computer specially developed for the Air Force to handle completely general mechanical restraint optimization programs of this type, where the acceleration input may be any arbitrary function of time.
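
    For the simple cases the paper treats analytically, the occupant can be modelled as a single mass coupled to the vehicle through a spring-damper restraint, and the transmitted acceleration follows from the relative-motion equation m x'' = -k x - c x' - m a_vehicle(t). The sketch below integrates that model for a half-sine crash pulse; the mass, stiffness, damping, and pulse parameters are illustrative assumptions.

        import numpy as np

        m, k, c = 75.0, 3.0e4, 1.2e3        # occupant mass [kg], restraint stiffness, damping (assumed)
        dt, t_end, pulse_T, peak_g = 1e-4, 0.4, 0.1, 20.0
        g = 9.81

        def vehicle_acc(t):
            # Half-sine vehicle acceleration pulse.
            return peak_g * g * np.sin(np.pi * t / pulse_T) if t < pulse_T else 0.0

        x, v = 0.0, 0.0                      # occupant displacement/velocity relative to vehicle
        max_occ_acc = 0.0
        t = 0.0
        while t < t_end:
            a_veh = vehicle_acc(t)
            a_rel = (-k * x - c * v) / m - a_veh     # relative acceleration
            occupant_acc = a_rel + a_veh             # absolute occupant acceleration
            max_occ_acc = max(max_occ_acc, abs(occupant_acc))
            v += a_rel * dt
            x += v * dt
            t += dt

        print(f"peak input: {peak_g:.1f} g, peak transmitted: {max_occ_acc / g:.1f} g")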

  12. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Wang, Peng; Plimpton, Steven J

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
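
    One way to realise the dynamic CPU/accelerator load balancing described above is to nudge the fraction of work sent to the accelerator toward the split that equalises the two measured step times. The sketch below shows such a rebalancing rule with simulated timings; it is an illustrative assumption about the control law, not the scheme implemented in LAMMPS.

        def rebalance(frac_gpu, t_gpu, t_cpu, damping=0.5):
            # Update the accelerator work fraction from last step's measured times,
            # assuming cost is proportional to the work assigned to each side.
            rate_gpu = frac_gpu / t_gpu            # work per unit time on the accelerator
            rate_cpu = (1.0 - frac_gpu) / t_cpu    # work per unit time on the host cores
            ideal = rate_gpu / (rate_gpu + rate_cpu)
            return frac_gpu + damping * (ideal - frac_gpu)

        frac = 0.5
        for step in range(5):
            # Pretend the accelerator is 4x faster per particle than the CPU partition.
            t_gpu, t_cpu = frac / 4.0, (1.0 - frac) / 1.0
            frac = rebalance(frac, t_gpu, t_cpu)
            print(f"step {step}: accelerator fraction -> {frac:.3f}")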

  13. A redshift survey of IRAS galaxies. V - The acceleration on the Local Group

    NASA Technical Reports Server (NTRS)

    Strauss, Michael A.; Yahil, Amos; Davis, Marc; Huchra, John P.; Fisher, Karl

    1992-01-01

    The acceleration on the Local Group is calculated based on a full-sky redshift survey of 5288 galaxies detected by IRAS. A formalism is developed to compute the distribution function of the IRAS acceleration for a given power spectrum of initial perturbations. The computed acceleration on the Local Group points 18-28 deg from the direction of the Local Group peculiar velocity vector. The data suggest that the CMB dipole is indeed due to the motion of the Local Group, that this motion is gravitationally induced, and that the distribution of IRAS galaxies on large scales is related to that of dark matter by a simple linear biasing model.
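
    In linear theory, the peculiar gravitational acceleration at the origin is estimated from a weighted dipole sum over the survey galaxies, proportional to the sum of w_i r_hat_i / r_i^2, with weights that correct for the flux-limited selection function. The sketch below evaluates that sum for a synthetic catalogue; the galaxy positions and the selection function are invented stand-ins for the IRAS sample.

        import numpy as np

        rng = np.random.default_rng(3)
        n_gal = 5000
        r = rng.uniform(1.0, 100.0, n_gal)                      # distances (toy units)
        costheta = rng.uniform(-1.0, 1.0, n_gal)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_gal)
        sintheta = np.sqrt(1.0 - costheta**2)
        unit = np.column_stack([sintheta * np.cos(phi),
                                sintheta * np.sin(phi),
                                costheta])                      # unit vectors toward each galaxy

        selection = np.exp(-r / 60.0)                           # toy selection function phi(r)
        weights = 1.0 / selection                               # 1/phi weighting

        dipole = (weights[:, None] * unit / r[:, None] ** 2).sum(axis=0)
        direction = dipole / np.linalg.norm(dipole)
        print("estimated acceleration direction (unit vector):", direction)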

  14. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

  15. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  16. Falling Particles: Concept Definition and Capital Cost Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoddard, Larry; Galluzzo, Geoff; Adams, Shannon

    2016-06-30

    The Department of Energy’s (DOE) Office of Renewable Power (ORP) has been tasked to provide effective program management and strategic direction for all of the DOE’s Energy Efficiency & Renewable Energy’s (EERE’s) renewable power programs. The ORP’s efforts to accomplish this mission are aligned with national energy policies, DOE strategic planning, EERE’s strategic planning, Congressional appropriation, and stakeholder advice. ORP is supported by three renewable energy offices, of which one is the Solar Energy Technology Office (SETO) whose SunShot Initiative has a mission to accelerate research, development and large scale deployment of solar technologies in the United States. SETO has a goal of reducing the cost of Concentrating Solar Power (CSP) by 75 percent of 2010 costs by 2020 to reach parity with base-load energy rates, and to reduce costs 30 percent further by 2030. The SunShot Initiative is promoting the implementation of high temperature CSP with thermal energy storage allowing generation during high demand hours. The SunShot Initiative has funded significant research and development work on component testing, with attention to high temperature molten salts, heliostats, receiver designs, and high efficiency high temperature supercritical CO2 (sCO2) cycles.

  17. Binding and strategic selection in working memory: a lifespan dissociation.

    PubMed

    Sander, Myriam C; Werkle-Bergner, Markus; Lindenberger, Ulman

    2011-09-01

    Working memory (WM) shows a gradual increase during childhood, followed by accelerating decline from adulthood to old age. To examine these lifespan differences more closely, we asked 34 children (10-12 years), 40 younger adults (20-25 years), and 39 older adults (70-75 years) to perform a color change detection task. Load levels and encoding durations were varied for displays including targets only (Experiment 1) or targets plus distracters (Experiment 2, investigating a subsample of Experiment 1). WM performance was lower in older adults and children than in younger adults. Longer presentation times were associated with better performance in all age groups, presumably reflecting increasing effects of strategic selection mechanisms on WM performance. Children outperformed older adults when encoding times were short, and distracter effects were larger in children and older adults than in younger adults. We conclude that strategic selection in WM develops more slowly during childhood than basic binding operations, presumably reflecting the delay in maturation of frontal versus medio-temporal brain networks. In old age, both sets of mechanisms decline, reflecting senescent change in both networks. We discuss similarities to episodic memory development and address open questions for future research.

  18. MIT Laboratory for Computer Science Progress Report, July 1984-June 1985

    DTIC Science & Technology

    1985-06-01

    larger (up to several thousand machines) multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense...Szolovits, Group Leader R. Patil Collaborating Investigators M. Criscitiello, M.D., Tufts-New England Medical Center Hospital J. Dzierzanowski, Ph.D., Dept...COMPUTATION STRUCTURES Academic Staff J. B. Dennis, Group Leader Research Staff W. B. Ackerman G. A. Boughton W. Y-P. Lim Graduate Students T-A. Chu S

  19. Large-Scale Calculations for Material Sciences Using Accelerators to Improve Time- and Energy-to-Solution

    DOE PAGES

    Eisenbach, Markus

    2017-01-01

    A major impediment to deploying next-generation high-performance computational systems is the required electrical power, often measured in units of megawatts. The solution to this problem is driving the introduction of novel machine architectures, such as those employing many-core processors and specialized accelerators. In this article, we describe the use of a hybrid accelerated architecture to achieve both reduced time to solution and the associated reduction in the electrical cost for a state-of-the-art materials science computation.

  20. DARPA Concurrent Design/Concurrent Engineering Workshop Held in Key West, Florida on December 6-8, 1988

    DTIC Science & Technology

    1988-12-01

    engineering disciplines. (Here I refer to training in multifunction team management disciplines, quality engineering methods, experimental design by such...SOME ISSUES: View of strategic issues has been evolving - speed of design and product deployment - to accelerate experimentation with new...manufacturing process design; new technologies (e.g., composites) which can revolutionize product technical design in some cases. Issue still to be faced: "non

  1. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of the next generation Gen-IV nuclear reactor core designs brings challenges to the neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe the spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research focused on designing specialized hardware based on the reconfigurable computing technique in order to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and opening the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.

  2. Partnering to develop a talent pipeline for emerging health leaders in operations research.

    PubMed

    Ng, Alfred; Henshaw, Carly; Carter, Michael

    2017-05-01

    In initiating its first central office for Quality Improvement (QI), The Scarborough Hospital (TSH) sought to accelerate momentum towards achieving its "Quality and Sustainability" strategic priority by building internal capacity in the emerging QI specialty of operations research. The Scarborough Hospital reviewed existing models of talent management in conjunction with Lean and improvement philosophies. Through simple guiding principles and in collaboration with the University of Toronto's Centre for Healthcare Engineering, TSH developed a targeted approach to talent management for Operations Research (OR) in the Office of Innovation and Performance Improvement, reduced the time from staffing need to onboarding, accelerated the development of new staff in delivering QI and OR projects, and defined new structures and processes to retain and develop this group of new emerging health leaders.

  3. Final safety analysis report for the Ground Test Accelerator (GTA), Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-10-01

    This document is the third volume of a three-volume safety analysis report on the Ground Test Accelerator (GTA). The GTA program at the Los Alamos National Laboratory (LANL) is the major element of the national Neutral Particle Beam (NPB) program, which is supported by the Strategic Defense Initiative Office (SDIO). A principal goal of the national NPB program is to assess the feasibility of using hydrogen and deuterium neutral particle beams outside the Earth's atmosphere. The main effort of the NPB program at Los Alamos concentrates on developing the GTA. The GTA is classified as a low-hazard facility, except for the cryogenic-cooling system, which is classified as a moderate-hazard facility. This volume consists of appendices C through U of the report.

  4. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  5. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  6. Unstructured LES of Reacting Multiphase Flows in Realistic Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Ham, Frank; Apte, Sourabh; Iaccarino, Gianluca; Wu, Xiao-Hua; Herrmann, Marcus; Constantinescu, George; Mahesh, Krishnan; Moin, Parviz

    2003-01-01

    As part of the Accelerated Strategic Computing Initiative (ASCI) program, an accurate and robust simulation tool is being developed to perform high-fidelity LES studies of multiphase, multiscale turbulent reacting flows in aircraft gas turbine combustor configurations using hybrid unstructured grids. In the combustor, pressurized gas from the upstream compressor is reacted with atomized liquid fuel to produce the combustion products that drive the downstream turbine. The Large Eddy Simulation (LES) approach is used to simulate the combustor because of its demonstrated superiority over RANS in predicting turbulent mixing, which is central to combustion. This paper summarizes the accomplishments of the combustor group over the past year, concentrating mainly on the two major milestones achieved this year: 1) Large scale simulation: A major rewrite and redesign of the flagship unstructured LES code has allowed the group to perform large eddy simulations of the complete combustor geometry (all 18 injectors) with over 100 million control volumes; 2) Multi-physics simulation in complex geometry: The first multi-physics simulations including fuel spray breakup, coalescence, evaporation, and combustion are now being performed in a single periodic sector (1/18th) of an actual Pratt & Whitney combustor geometry.

  7. Building the Human Vaccines Project: strategic management recommendations and summary report of the 15-16 July 2014 business workshop.

    PubMed

    Schenkelberg, Theodore; Kieny, Marie-Paule; Bianco, A E; Koff, Wayne C

    2015-05-01

    The Human Vaccines Project is a bold new initiative, with the goal of solving the principal scientific problem impeding vaccine development for infectious diseases and cancers: the generation of specific, broad, potent and durable immune responses in humans. In the July 2014 workshop, 20 leaders from the public and private sectors came together to give input on strategic business issues for the creation of the Human Vaccines Project. Participants recommended the Project to be established as a nonprofit public-private partnership, structured as a global R&D consortium closely engaged with industrial partners, and located/affiliated with one or more major academic centers conducting vaccine R&D. If successful, participants concluded that the Project could greatly accelerate the development of new and improved vaccines, with the potential to transform disease prevention in the 21st century.

  8. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David; Agarwal, Deborah A.; Sun, Xin

    2011-09-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  9. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Agarwal, D.; Sun, X.

    2011-01-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  10. Accelerated Reader.

    ERIC Educational Resources Information Center

    Education Commission of the States, Denver, CO.

    This paper provides an overview of Accelerated Reader, a system of computerized testing and record-keeping that supplements the regular classroom reading program. Accelerated Reader's primary goal is to increase literature-based reading practice. The program offers a computer-aided reading comprehension and management program intended to motivate…

  11. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
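
    The multigrid concept referred to above can be illustrated on a 1-D Poisson problem: high-frequency error is removed by a few damped-Jacobi sweeps, the smooth remainder is corrected on a coarser grid, and the correction is interpolated back. The sketch below implements one two-grid cycle under those simplifications; it is illustrative only and not the scheme implemented in the Proteus code.

        import numpy as np

        def jacobi(u, f, h, nsweeps, omega=2.0 / 3.0):
            # Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet ends.
            for _ in range(nsweeps):
                u_new = u.copy()
                u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[2:] + u[:-2] + h * h * f[1:-1])
                u = u_new
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[2:] - u[:-2]) / (h * h)
            return r

        def two_grid_cycle(u, f, h):
            u = jacobi(u, f, h, nsweeps=3)                        # pre-smoothing
            r = residual(u, f, h)
            rc = r[::2].copy()                                    # restriction (injection)
            hc, nc = 2.0 * h, r[::2].size
            # Exact coarse solve of the same operator for the error, zero boundary values.
            A = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
                 - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
            ec = np.zeros(nc)
            ec[1:-1] = np.linalg.solve(A, rc[1:-1])
            e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # prolongation
            return jacobi(u + e, f, h, nsweeps=3)                 # post-smoothing

        n = 65
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.pi ** 2 * np.sin(np.pi * x)                        # -u'' = f, u(0)=u(1)=0
        u = np.zeros(n)
        for cycle in range(10):
            u = two_grid_cycle(u, f, h)
        print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))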

  12. Strategic consequences of emotional misrepresentation in negotiation: The blowback effect.

    PubMed

    Campagna, Rachel L; Mislin, Alexandra A; Kong, Dejun Tony; Bottom, William P

    2016-05-01

    Recent research indicates that expressing anger elicits concession making from negotiating counterparts. When emotions are conveyed either by a computer program or by a confederate, results appear to affirm a long-standing notion that feigning anger is an effective bargaining tactic. We hypothesize this tactic actually jeopardizes postnegotiation deal implementation and subsequent exchange. Four studies directly test both tactical and strategic consequences of emotional misrepresentation. False representations of anger generated little tactical benefit but produced considerable and persistent strategic disadvantage. This disadvantage is because of an effect we call "blowback." A negotiator's misrepresented anger creates an action-reaction cycle that results in genuine anger and diminishes trust in both the negotiator and counterpart. Our findings highlight the importance of considering the strategic implications of emotional misrepresentation for negotiators interested in claiming value. We discuss the benefits of researching reciprocal interdependence between 2 or more negotiating parties and of modeling value creation beyond deal construction to include implementation of terms. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission, and Super-Resolution Surface Modeling.

  14. Software package for modeling spin–orbit motion in storage rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyuzin, D. V., E-mail: d.zyuzin@fz-juelich.de

    2015-12-15

    A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10^6–10^9 particles in a beam during 10^9 turns in an accelerator (about 10^12–10^15 integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin–orbit dynamics.

  15. Using computer software to improve group decision-making.

    PubMed

    Mockler, R J; Dologite, D G

    1991-08-01

    This article provides a review of some of the work done in the area of knowledge-based systems for strategic planning. Since 1985, with the founding of the Center for Knowledge-based Systems for Business Management, the project has focused on developing knowledge-based systems (KBS) based on these models. In addition, the project also involves developing a variety of computer and non-computer methods and techniques for assisting both technical and non-technical managers and individuals to do decision modelling and KBS development. This paper presents a summary of one segment of the project: a description of integrative groupware useful in strategic planning. The work described here is part of an ongoing research project. As part of this project, for example, over 200 non-technical and technical business managers, most of them working full-time during the project, developed over 160 KBS prototype systems in conjunction with MBA courses in strategic planning and management decision making. Based on replies to a survey of this test group, 28 per cent of the survey respondents reported their KBS were used at work, 21 per cent reportedly received promotions, pay rises or new jobs based on their KBS development work, and 12 per cent reported their work led to participation in other KBS development projects at work. All but two of the survey respondents reported that their work on the KBS development project led to a substantial increase in their job knowledge or performance.

  16. A Neural Mechanism of Strategic Social Choice under Sanction-Induced Norm Compliance1,2,3

    PubMed Central

    Makwana, Aidan; Grön, Georg; Fehr, Ernst

    2015-01-01

    Abstract In recent years, much has been learned about the representation of subjective value in simple, nonstrategic choices. However, a large fraction of our daily decisions are embedded in social interactions in which value guided decisions require balancing benefits for self against consequences imposed by others in response to our choices. Yet, despite their ubiquity, much less is known about how value computation takes place in strategic social contexts that include the possibility of retribution for norm violations. Here, we used functional magnetic resonance imaging (fMRI) to show that when human subjects face such a context, connectivity increases between the temporoparietal junction (TPJ), implicated in the representation of other people's thoughts and intentions, and regions of ventromedial prefrontal cortex (vmPFC) that are associated with value computation. In contrast, we find no increase in connectivity between these regions in social nonstrategic cases where decision-makers are immune from retributive monetary punishments from a human partner. Moreover, there was also no increase in TPJ-vmPFC connectivity when the potential punishment was performed by a computer programmed to punish fairness norm violations in the same manner as a human would. Thus, TPJ-vmPFC connectivity is not simply a function of the social or norm enforcing nature of the decision, but rather occurs specifically in situations where subjects make decisions in a social context and strategically consider putative consequences imposed by others. PMID:26464981

  17. Accelerated spike resampling for accurate multiple testing controls.

    PubMed

    Harrison, Matthew T

    2013-02-01

    Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
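
    As a point of reference for the resampling being accelerated, the sketch below shows a plain (un-accelerated) two-sample permutation test on spike counts in Python; the importance-sampling machinery described in the abstract is not reproduced, and all variable names and parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def permutation_pvalue(counts_a, counts_b, n_perm=10_000):
          """Two-sample permutation test on mean spike counts.

          counts_a, counts_b: 1-D arrays of spike counts per trial in two conditions.
          Returns a two-sided p-value for the difference of means.
          """
          observed = abs(counts_a.mean() - counts_b.mean())
          pooled = np.concatenate([counts_a, counts_b])
          n_a = len(counts_a)
          exceed = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)
              diff = abs(perm[:n_a].mean() - perm[n_a:].mean())
              exceed += diff >= observed
          return (exceed + 1) / (n_perm + 1)

      # Example: simulated spike counts from two conditions.
      a = rng.poisson(5.0, size=40)
      b = rng.poisson(6.5, size=40)
      print(permutation_pvalue(a, b))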

  18. Derivation of improved load transformation matrices for launchers-spacecraft coupled analysis, and direct computation of margins of safety

    NASA Technical Reports Server (NTRS)

    Klein, M.; Reynolds, J.; Ricks, E.

    1989-01-01

    Load and stress recovery from transient dynamic studies are improved upon using an extended acceleration vector in the modal acceleration technique applied to structural analysis. Extension of the normal LTM (load transformation matrices) stress recovery to automatically compute margins of safety is presented with an application to the Hubble space telescope.

  19. The application of artificial intelligent techniques to accelerator operations at McMaster University

    NASA Astrophysics Data System (ADS)

    Poehlman, W. F. S.; Garland, Wm. J.; Stark, J. W.

    1993-06-01

    In an era of downsizing and a limited pool of skilled accelerator personnel from which to draw replacements for an aging workforce, the impetus to integrate intelligent computer automation into the accelerator operator's repertoire is strong. However, successful deployment of an "Operator's Companion" is not trivial. Both graphical and human factors need to be recognized as critical areas that require extra care when formulating the Companion: the interactive graphical user interface must mimic, for the operator, familiar accelerator controls; knowledge acquisition phases during development must acknowledge the expert's mental model of machine operation; and automated operations must be seen as improvements to the operator's environment rather than threats of ultimate replacement. Experiences with the PACES Accelerator Operator Companion developed at two sites over the past three years are related and graphical examples are given. The scale of the work involves multi-computer control of various start-up/shutdown and tuning procedures for Model FN and KN Van de Graaff accelerators. The response from licensing agencies has been encouraging.

  20. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  1. Strategic directions of computing at Fermilab

    NASA Astrophysics Data System (ADS)

    Wolbers, Stephen

    1998-05-01

    Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.

  2. Clarifying the "A" in CAI for Learners of Different Abilities. Assessing the Cognitive consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Mandinach, Ellen B.

    This study investigated the degree to which 48 seventh and eighth grade students of different abilities acquired strategic planning knowledge from an intellectual computer game ("Wumpus"). Relationships between ability and student performance with two versions of the game were also investigated. The two versions differed in the structure…

  3. KSC-99pp1227

    NASA Image and Video Library

    1999-10-06

    Children at Audubon Elementary School, Merritt Island, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Audubon is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  4. GPU Accelerated Prognostics

    NASA Technical Reports Server (NTRS)

    Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley

    2017-01-01

    Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.

  5. News | Computing

    Science.gov Websites

    Navigation listing from the source page: Computing for Experiments; Computing for Neutrino and Muon Physics; Computing for Collider Experiments; Computing for Astrophysics; Research and Development; Accelerator Modeling (ComPASS); and Daniel Elvira's paper on the impact of detector simulation on particle physics collider experiments.

  6. Computational Science and Innovation

    NASA Astrophysics Data System (ADS)

    Dean, D. J.

    2011-09-01

    Simulations - utilizing computers to solve complicated science and engineering problems - are a key ingredient of modern science. The U.S. Department of Energy (DOE) is a world leader in the development of high-performance computing (HPC), the development of applied math and algorithms that utilize the full potential of HPC platforms, and the application of computing to science and engineering problems. An interesting general question is whether the DOE can strategically utilize its capability in simulations to advance innovation more broadly. In this article, I will argue that this is certainly possible.

  7. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding-windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. Multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. Proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerated the DFC analyses significantly. Developed algorithms make the DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
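
    The core computation being parallelized is a sliding-window correlation between time courses. Below is a minimal serial Python sketch of that step, with an assumed window length and illustrative data; in the paper each window, or block of windows, becomes an independent OpenMP thread or CUDA work item.

      import numpy as np

      def sliding_window_dfc(x, y, window=30, step=1):
          """Dynamic functional connectivity between two time courses.

          Returns the Pearson correlation of x and y inside each sliding window.
          This is the serial form of the computation; on a GPU each window
          (or block of windows) can be handled by an independent work item.
          """
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          starts = range(0, len(x) - window + 1, step)
          return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                           for s in starts])

      # Usage with two illustrative fMRI-like time courses of 300 volumes.
      rng = np.random.default_rng(1)
      tc1 = rng.standard_normal(300)
      tc2 = 0.5 * tc1 + rng.standard_normal(300)
      print(sliding_window_dfc(tc1, tc2, window=30).shape)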

  8. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    PubMed

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study include: (i) exploring the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) proposing a fast algorithm that is based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the parameter distribution of the WMC Nakagami image narrow as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window on the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with an improved image quality and fast computation. Copyright © 2016 Elsevier B.V. All rights reserved.
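
    A hedged Python sketch of the general idea follows: local second- and fourth-order moments of the envelope are obtained with a convolution-style uniform filter, so a Nakagami m-parameter map at one-pixel shifts is computed without an explicit sliding-window loop. This is not the paper's FACO or WMC formulation; the moment estimator, window size, and test data are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def nakagami_m_map(envelope, window=15):
          """Nakagami m-parameter map from an ultrasound envelope image.

          Local moments E[R^2] and E[R^4] are obtained with uniform_filter,
          a convolution-style local average, so every one-pixel window shift
          is evaluated at once instead of looping over overlapping windows.
          """
          i = envelope.astype(float) ** 2            # intensity R^2
          e_i = uniform_filter(i, size=window)       # local E[R^2]
          e_i2 = uniform_filter(i * i, size=window)  # local E[R^4]
          var_i = np.maximum(e_i2 - e_i ** 2, 1e-12)
          return e_i ** 2 / var_i                    # moment estimate of m

      # Usage: a Rayleigh envelope should give m close to 1 on average.
      rng = np.random.default_rng(2)
      env = rng.rayleigh(scale=1.0, size=(128, 128))
      print(nakagami_m_map(env, window=15).mean())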

  9. Defense Department Cyber Efforts: More Detailed Guidance Needed to Ensure Military Services Develop Appropriate Cyberspace Capabilities

    DTIC Science & Technology

    2011-05-01

    communications and on computer networks—its Global Information Grid—which are potentially jeopardized by the millions of denial-of-service attacks, hacking ... [remainder of snippet is organizational-chart residue listing the National Security Agency, the Defense Information Systems Agency Command Center, Joint Staff directorates J1–J8 (including J39 Operations and J5 Strategic Plans and Policy, Pentagon, Washington, DC), and U.S. Strategic Command]

  10. Accelerator on a Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    England, Joel

    2014-06-30

    SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)

  11. Accelerator on a Chip

    ScienceCinema

    England, Joel

    2018-01-16

    SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)

  12. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
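
    One of the accelerations named above, replacing linear searches with binary versions, is illustrated by the small Python sketch below on a hypothetical sorted energy grid; the table values and function names are illustrative and not taken from the ITS source.

      import bisect

      # Sorted energy grid (MeV) of the kind a transport code searches
      # repeatedly to locate the enclosing cross-section bin.
      energy_grid = [0.01, 0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0]

      def find_bin_linear(e):
          """O(n) scan: the pattern being replaced."""
          for i in range(len(energy_grid) - 1):
              if energy_grid[i] <= e < energy_grid[i + 1]:
                  return i
          raise ValueError("energy outside table")

      def find_bin_binary(e):
          """O(log n) lookup via bisect: the 'binary version'."""
          i = bisect.bisect_right(energy_grid, e) - 1
          if 0 <= i < len(energy_grid) - 1:
              return i
          raise ValueError("energy outside table")

      # Both searches agree; only the cost per lookup differs.
      assert find_bin_linear(0.7) == find_bin_binary(0.7) == 3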

  13. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to the un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  14. Extended Task Space Control for Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor); Long, Mark K. (Inventor)

    1996-01-01

    The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities, and positions of plural behaviors to be exhibited by the robot simultaneously; computing actuator accelerations of the joint actuators for the current sampling interval from both the behavior forces, accelerations, velocities, and positions of the current sampling interval and the actuator velocities and positions of the previous sampling interval; computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval; and, finally, controlling the actuators in accordance with the actuator accelerations, velocities, and positions of the current sampling interval. The actuator accelerations, velocities, and positions of the current sampling interval are stored for use during the next sampling interval.

  15. Covariant Uniform Acceleration

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-04-01

    We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation F = dp/dt, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. Every rest point of K' is uniformly accelerated, and its acceleration is a function of the observer's acceleration and its position. We obtain an interpretation of the Lorentz-Abraham-Dirac equation as an acceleration transformation from K' to K.
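
    In LaTeX, the covariant definition of uniform acceleration described above can be sketched as follows; the index placement and the factor of c are assumed conventions rather than a quotation from the paper.

      % Uniform acceleration: a constant antisymmetric tensor A acting on the
      % four-velocity u replaces the four-force (conventions assumed here).
      \[
        c\,\frac{du^{\mu}}{d\tau} \;=\; A^{\mu}{}_{\nu}\,u^{\nu},
        \qquad
        A_{\mu\nu} = -A_{\nu\mu} = \text{const}.
      \]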

  16. Highly accelerated acquisition and homogeneous image reconstruction with rotating RF coil array at 7T-A phantom based study.

    PubMed

    Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart

    2014-03-01

    Parallel imaging (PI) is widely used for imaging acceleration by means of coil spatial sensitivities associated with phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative method to reduce scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration ability and homogeneous image reconstruction strategy of a 4-element rotating radiofrequency coil array (RRFCA) were numerically investigated and experimentally validated at 7T with a homogeneous phantom. Each coil of the RRFCA was capable of acquiring a large number of sensitivity profiles, leading to a better acceleration performance illustrated by the improved geometry-maps that have lower maximum values and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed to provide image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that the proposed technique is able to provide homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, by employing the high imaging acceleration and homogeneous imaging reconstruction ability of the RRFCA, the proposed method will facilitate human imaging for ultra-high-field MRI. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Laboratory directed research and development fy1999 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Ayat, R A

    2000-04-11

    The Lawrence Livermore National Laboratory (LLNL) was founded in 1952 and has been managed since its inception by the University of California (UC) for the U.S. Department of Energy (DOE). Because of this long association with UC, the Laboratory has been able to recruit a world-class workforce, establish an atmosphere of intellectual freedom and innovation, and achieve recognition in relevant fields of knowledge as a scientific and technological leader. This environment and reputation are essential for sustained scientific and technical excellence. As a DOE national laboratory with about 7,000 employees, LLNL has an essential and compelling primary mission to ensure that the nation's nuclear weapons remain safe, secure, and reliable and to prevent the spread and use of nuclear weapons worldwide. The Laboratory receives funding from the DOE Assistant Secretary for Defense Programs, whose focus is stewardship of our nuclear weapons stockpile. Funding is also provided by the Deputy Administrator for Defense Nuclear Nonproliferation, many Department of Defense sponsors, other federal agencies, and the private sector. As a multidisciplinary laboratory, LLNL has applied its considerable skills in high-performance computing, advanced engineering, and the management of large research and development projects to become the science and technology leader in those areas of its mission responsibility. The Laboratory Directed Research and Development (LDRD) Program was authorized by the U.S. Congress in 1984. The Program allows the Director of each DOE laboratory to fund advanced, creative, and innovative research and development (R&D) activities that will ensure scientific and technical vitality in the continually evolving mission areas at DOE and the Laboratory. In addition, the LDRD Program provides LLNL with the flexibility to nurture and enrich essential scientific and technical competencies, which attract the most qualified scientists and engineers. The LDRD Program also enables many collaborations with the scientific community in academia, national and international laboratories, and industry. The projects in the FY1999 LDRD portfolio were carefully selected to continue vigorous support of the strategic vision and the long-term goals of DOE and the Laboratory. Projects chosen for LDRD funding undergo stringent selection processes, which look for high-potential scientific return, emphasize strategic relevance, and feature technical peer reviews by external and internal experts. The FY1999 projects described in this annual report focus on supporting the Laboratory's national security needs: stewardship of the U.S. nuclear weapons stockpile, responsibility for the counter- and nonproliferation of weapons of mass destruction, development of high-performance computing, and support of DOE environmental research and waste management programs. In the past, LDRD investments have significantly enhanced LLNL scientific capabilities and greatly contributed to the Laboratory's ability to meet its national security programmatic requirements. Examples of past investments include technical precursors to the Accelerated Strategic Computing Initiative (ASCI), special-materials processing and characterization, and biodefense. Our analysis of the FY1999 portfolio shows that it strongly supports the Laboratory's national security mission: about 95% of the LDRD dollars directly supported LLNL's national security activities in FY1999, far exceeding the 63% of LLNL's overall budget supported by National Security Programs in that year.

  18. By Deploying Weapons in Space, Is the United States Opening a Theater of Engagement That Could Disadvantage the United States in the Long Term?

    DTIC Science & Technology

    2001-06-01

    totaled $3.48 million and included research into “power system materials, particle accelerators, platforms and theater defense architecture” (Strategic...Scowcroft, Nye, and Shear 1987, 10). In a minor conflict, destroying a multimillion-dollar satellite could increase tensions. Perry, Scowcroft, Nye and...Gabbard 1998, 40). The reprisal would not be performed because of a loss of a multimillion-dollar satellite but to show will. “As the leaders in space power

  19. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., 'practice' using a computer keyboard, part of equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  20. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., look with curiosity at the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  1. Audubon Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Audubon Elementary School, Merritt Island, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Audubon is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  2. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., eagerly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  3. Coquina Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Children at Coquina Elementary School, Titusville, Fla., excitedly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  4. Hospital positioning: a strategic tool for the 1990s.

    PubMed

    San Augustine, A J; Long, W J; Pantzallis, J

    1992-03-01

    The authors extend the process of market positioning in the health care sector by focusing on the simultaneous utilization of traditional research methods and emerging new computer-based adaptive perceptual mapping technologies and techniques.

  5. GPU acceleration of Dock6's Amber scoring computation.

    PubMed

    Yang, Hailong; Zhou, Qiongqiong; Li, Bo; Wang, Yongjian; Luan, Zhongzhi; Qian, Depei; Li, Hanlu

    2010-01-01

    Addressing the problem of virtual screening is a long-term goal in the drug discovery field, which, if properly solved, can significantly shorten new drugs' R&D cycle. The scoring functionality that evaluates the fitness of the docking result is one of the major challenges in virtual screening. In general, scoring functionality in docking requires a large amount of floating-point calculations, which usually take several weeks or even months to finish. This time-consuming procedure is unacceptable, especially when a highly fatal and infectious virus such as SARS or H1N1 arises, which forces the scoring task to be done in a limited time. This paper presents how to leverage the computational power of the GPU to accelerate Dock6's (http://dock.compbio.ucsf.edu/DOCK_6/) Amber (J. Comput. Chem. 25: 1157-1174, 2004) scoring with the NVIDIA CUDA (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008) platform. We also discuss many factors that greatly influence the performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that the GPU-accelerated Amber scoring achieves a 6.5× speedup with respect to the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes the Amber scoring more competitive and efficient for large-scale virtual screening problems.

  6. Bio-steps beyond Turing.

    PubMed

    Calude, Cristian S; Păun, Gheorghe

    2004-11-01

    Are there 'biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite: for the first time in the literature on molecular computing, we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods, which are based on non-determinism, our results rest upon universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and is realised either by decreasing the size of "reactors" or by speeding up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we deal with Turing non-computable activity of the brain and possible forms of (extraterrestrial) intelligence.

  7. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against a benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.

  8. Angular Impact Mitigation System for Bicycle Helmets to Reduce Head Acceleration and Risk of Traumatic Brain Injury

    PubMed Central

    Hansen, Kirk; Dau, Nathan; Feist, Florian; Deck, Caroline; Willinger, Rémy; Madey, Steven M.; Bottlang, Michael

    2013-01-01

    Angular acceleration of the head is a known cause of traumatic brain injury (TBI), but contemporary bicycle helmets lack dedicated mechanisms to mitigate angular acceleration. A novel Angular Impact Mitigation (AIM) system for bicycle helmets has been developed that employs an elastically suspended aluminum honeycomb liner to absorb linear acceleration in normal impacts as well as angular acceleration in oblique impacts. This study tested bicycle helmets with and without AIM technology to comparatively assess impact mitigation. Normal impact tests were performed to measure linear head acceleration. Oblique impact tests were performed to measure angular head acceleration and neck loading. Furthermore, acceleration histories of oblique impacts were analyzed in a computational head model to predict the resulting risk of TBI in the form of concussion and diffuse axonal injury (DAI). Compared to standard helmets, AIM helmets resulted in a 14% reduction in peak linear acceleration (p < 0.001), a 34% reduction in peak angular acceleration (p < 0.001), and a 22% to 32% reduction in neck loading (p < 0.001). Computational results predicted that AIM helmets reduced the risk of concussion and DAI by 27% and 44%, respectively. In conclusion, these results demonstrated that AIM technology could effectively improve impact mitigation compared to a contemporary expanded polystyrene-based bicycle helmet, and may enhance prevention of bicycle-related TBI. Further research is required. PMID:23770518

  9. Accelerated Reader. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2009

    2009-01-01

    "Accelerated Reader" is a computer-based reading management system designed to complement an existing classroom literacy program for grades pre-K-12. It is designed to increase the amount of time students spend reading independently. Students choose reading-level appropriate books or short stories for which Accelerated Reader tests are…

  10. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.

  11. Quantum computational complexity, Einstein's equations and accelerated expansion of the Universe

    NASA Astrophysics Data System (ADS)

    Ge, Xian-Hui; Wang, Bin

    2018-02-01

    We study the relation between quantum computational complexity and general relativity. The quantum computational complexity is proposed to be quantified by the shortest length of geodesic quantum curves. We examine the complexity/volume duality in a geodesic causal ball in the framework of Fermi normal coordinates and derive the full non-linear Einstein equation. Using insights from the complexity/action duality, we argue that the accelerated expansion of the universe could be driven by the quantum complexity and free from the coincidence and fine-tuning problems.

  12. Vaccine stability study design and analysis to support product licensure.

    PubMed

    Schofield, Timothy L

    2009-11-01

    Stability evaluation supporting vaccine licensure includes studies of bulk intermediates as well as final container product. Long-term and accelerated studies are performed to support shelf life and to determine release limits for the vaccine. Vaccine shelf life is best determined utilizing a formal statistical evaluation outlined in the ICH guidelines, while minimum release is calculated to help assure adequate potency through handling and storage of the vaccine. In addition to supporting release potency determination, accelerated stability studies may be used to support a strategy to recalculate product expiry after an unintended temperature excursion such as a cold storage unit failure or mishandling during transport. Appropriate statistical evaluation of vaccine stability data promotes strategic stability study design, in order to reduce the uncertainty associated with the determination of the degradation rate, and the associated risk to the customer.

  13. Status Report on the Development of Research Campaigns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.; Baker, Scott E.; Washton, Nancy M.

    2013-06-30

    Research campaigns were conceived as a means to focus EMSL research on specific scientific questions. Campaigns will help fulfill the Environmental Molecular Sciences Laboratory (EMSL) strategic vision to develop and integrate, for use by the scientific community, world-leading capabilities that transform understanding in the environmental molecular sciences and accelerate discoveries relevant to the Department of Energy’s (DOE’s) missions. Campaigns are multi-institutional, multi-disciplinary projects with scope beyond those of normal EMSL user projects. The goal of research campaigns is to have EMSL scientists and users team on the projects in an effort to accelerate progress and increase impact in specific scientific areas by focusing user research, EMSL resources, and expertise in those areas. This report gives a history and update on the progress of those campaigns.

  14. Final safety analysis report for the Ground Test Accelerator (GTA), Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-10-01

    This document is the first volume of a 3-volume safety analysis report on the Ground Test Accelerator (GTA). The GTA program at the Los Alamos National Laboratory (LANL) is the major element of the national Neutral Particle Beam (NPB) program, which is supported by the Strategic Defense Initiative Office (SDIO). A principal goal of the national NPB program is to assess the feasibility of using hydrogen and deuterium neutral particle beams outside the Earth's atmosphere. The main effort of the NPB program at Los Alamos concentrates on developing the GTA. The GTA is classified as a low-hazard facility, except for the cryogenic-cooling system, which is classified as a moderate-hazard facility. This volume consists of an introduction, summary/conclusion, site description and assessment, description of facility, and description of operation.

  15. Fast vaccine design and development based on correlates of protection (COPs)

    PubMed Central

    van Els, Cécile; Mjaaland, Siri; Næss, Lisbeth; Sarkadi, Julia; Gonczol, Eva; Smith Korsholm, Karen; Hansen, Jon; de Jonge, Jørgen; Kersten, Gideon; Warner, Jennifer; Semper, Amanda; Kruiswijk, Corine; Oftung, Fredrik

    2014-01-01

    New and reemerging infectious diseases call for innovative and efficient control strategies, of which fast vaccine design and development represent an important element. In emergency situations, when time is limited, identification and use of correlates of protection (COPs) may play a key role as a strategic tool for accelerated vaccine design, testing, and licensure. We propose that general rules for COP-based vaccine design can be extracted from the existing knowledge of protective immune responses against a large spectrum of relevant viral and bacterial pathogens. Herein, we focus on the applicability of this approach by reviewing the established and upcoming COPs for influenza in the context of traditional and a wide array of new vaccine concepts. The lessons learnt from this field may be applied more generally to COP-based accelerated vaccine design for emerging infections. PMID:25424803

  16. Anderson Acceleration for Fixed-Point Iterations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Homer F.

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
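
    For orientation, a minimal numpy sketch of Anderson acceleration applied to a generic fixed-point map g is shown below (a Type-II form with a small difference history); it illustrates the method in general, not code from the funded work, and the depth m, tolerance, and test problem are assumptions.

      import numpy as np

      def anderson_fixed_point(g, x0, m=5, tol=1e-10, maxiter=100):
          """Anderson acceleration for the fixed-point iteration x = g(x).

          Keeps the last m residual/iterate differences and extrapolates the
          next iterate from a small least-squares problem.
          """
          x = np.asarray(x0, dtype=float)
          gx = g(x)
          f = gx - x                       # current residual
          dG, dF = [], []                  # histories of differences
          for _ in range(maxiter):
              if np.linalg.norm(f) < tol:
                  break
              if dF:
                  F = np.column_stack(dF)
                  G = np.column_stack(dG)
                  gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
                  x_new = gx - G @ gamma   # accelerated update
              else:
                  x_new = gx               # plain Picard step on the first pass
              gx_new = g(x_new)
              f_new = gx_new - x_new
              dF.append(f_new - f)
              dG.append(gx_new - gx)
              if len(dF) > m:
                  dF.pop(0)
                  dG.pop(0)
              x, gx, f = x_new, gx_new, f_new
          return x

      # Usage: accelerate a slowly contracting map, here g(x) = cos(x) component-wise.
      sol = anderson_fixed_point(np.cos, np.zeros(3))
      print(sol, np.cos(sol) - sol)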

  17. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in quasistatic analysis of strip structures with the method of moments. From the analysis of four strip structures, the authors show that additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving the linear systems repeatedly, by choosing a proper order of operations and a suitable preconditioner. The obtained results can be used to accelerate computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is simple and universal, and could be used not only for strip structure analysis but also for a wide range of computational problems.
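
    A hedged SciPy sketch of the general idea, reusing one incomplete-LU preconditioner across a sequence of GMRES solves, is given below; it uses an illustrative 1-D Laplacian and varies only the right-hand sides, whereas the paper's parameter sweep varies the strip-structure matrices, so this is a simplification rather than the authors' procedure.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      def solve_sequence(matrix, right_hand_sides):
          """Solve a sequence of related sparse systems with one shared preconditioner.

          An incomplete-LU factorization is built once and reused as the GMRES
          preconditioner for every solve in the sequence, the kind of ordering
          choice that avoids repeating the most expensive setup work.
          """
          ilu = spla.spilu(matrix.tocsc())
          M = spla.LinearOperator(matrix.shape, matvec=ilu.solve)
          solutions = []
          for b in right_hand_sides:
              x, info = spla.gmres(matrix, b, M=M)
              assert info == 0, "GMRES did not converge"
              solutions.append(x)
          return solutions

      # Usage with an illustrative 1-D Laplacian and several right-hand sides.
      n = 200
      A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
      rhs = [np.random.default_rng(k).standard_normal(n) for k in range(4)]
      xs = solve_sequence(A, rhs)
      print(len(xs), np.linalg.norm(A @ xs[0] - rhs[0]))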

  18. Computing Game-Theoretic Solutions for Security in the Medium Term

    DTIC Science & Technology

    This project concerns the design of algorithms for computing game-theoretic solutions. (Game theory concerns how to act in a strategically optimal...way in environments with other agents who also seek to act optimally but have different, and possibly opposite, interests.) Such algorithms have...recently found application in a number of real-world security applications, including among others airport security, scheduling Federal Air Marshals, and

  19. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.

  20. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on the GPU, a program code was developed with the Compute Unified Device Architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576^3 computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU was demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
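
    A serial numpy sketch of the kind of explicit stencil update involved is shown below, using a simple Allen-Cahn-type phase-field equation rather than the paper's binary-alloy model; the grid size, coefficients, and time step are illustrative. Because every grid point is updated independently from its neighbors, sweeps like this map naturally onto one GPU thread per cell, with shared memory acting as a cache for the stencil neighbors.

      import numpy as np

      def laplacian(phi, dx):
          """5-point Laplacian with periodic boundaries."""
          return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2

      def allen_cahn_step(phi, dx=1.0, dt=0.1, eps=1.0, w=1.0):
          """One explicit time step of a simple Allen-Cahn phase-field equation."""
          dphi = eps**2 * laplacian(phi, dx) - w * phi * (phi**2 - 1.0)
          return phi + dt * dphi

      # Usage: start from a small random field and let two phases coarsen.
      rng = np.random.default_rng(3)
      phi = 0.01 * rng.standard_normal((256, 256))
      for _ in range(100):
          phi = allen_cahn_step(phi)
      print(phi.min(), phi.max())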

  1. Quantum supercharger library: hyper-parallelism of the Hartree-Fock method.

    PubMed

    Fernandes, Kyle D; Renison, C Alicia; Naidoo, Kevin J

    2015-07-05

    We present here a set of algorithms that completely rewrites the Hartree-Fock (HF) computations common to many legacy electronic structure packages (such as GAMESS-US, GAMESS-UK, and NWChem) into a massively parallel compute scheme that takes advantage of hardware accelerators such as Graphical Processing Units (GPUs). The HF compute algorithm is the core of a library of routines that we name the Quantum Supercharger Library (QSL). We briefly evaluate the QSL's performance and report that it accelerates a HF 6-31G Self-Consistent Field (SCF) computation by up to 20 times for medium-sized molecules (such as a buckyball) when compared with mature Central Processing Unit algorithms available in the legacy codes in regular use by researchers. It achieves this acceleration by massive parallelization of the one- and two-electron integrals and optimization of the SCF and Direct Inversion in the Iterative Subspace routines through the use of GPU linear algebra libraries. © 2015 Wiley Periodicals, Inc.

  2. Exploring the Learning from an Enterprise Simulation.

    ERIC Educational Resources Information Center

    Sawyer, John E.; Gopinath, C.

    1999-01-01

    A computer simulation used in teams by 151 business students tested their ability to translate strategy into decisions. Over eight weeks, the experiential learning activity encouraged strategic decision making and group behavior consistent with long-term strategy. (SK)

  3. Strategic Vision for Adopting 21st Century Science Methodologies

    EPA Pesticide Factsheets

    To better protect human health and the environment, EPA’s OPP is developing and evaluating new technologies in molecular, cellular, computational sciences to supplement or replace more traditional methods of toxicity testing and risk assessment.

  4. A "Star Wars" Objector Lays His Research on the Line.

    ERIC Educational Resources Information Center

    Tobias, Sheila

    1987-01-01

    For one optical scientist, Harrison Barrett, the decision not to accept funding for research related to the Strategic Defense Initiative has meant giving up a major part of his work in optical computing. (MSE)

  5. Plenary.

    ERIC Educational Resources Information Center

    Oettinger, Anthony G.

    2000-01-01

    Describes the Harvard Program on Information Resources Policy (PIRP) that studies how public policy and strategic corporate decisions affect information systems, including computer technologies; postal and mechanical transportation systems; information use by civilian and military organizations; effect of new technologies; international politics;…

  6. Biomaterials and computation: a strategic alliance to investigate emergent responses of neural cells.

    PubMed

    Sergi, Pier Nicola; Cavalcanti-Adam, Elisabetta Ada

    2017-03-28

    Topographical and chemical cues drive migration, outgrowth and regeneration of neurons in different and crucial biological conditions. In the natural extracellular matrix, their influences are so closely coupled that they result in complex cellular responses. As a consequence, engineered biomaterials are widely used to simplify in vitro conditions, disentangling intricate in vivo behaviours, and narrowing the investigation on particular emergent responses. Nevertheless, how topographical and chemical cues affect the emergent response of neural cells is still unclear, thus in silico models are used as additional tools to reproduce and investigate the interactions between cells and engineered biomaterials. This work aims at presenting the synergistic use of biomaterials-based experiments and computation as a strategic way to promote the discovery of complex neural responses as well as to allow the interactions between cells and biomaterials to be quantitatively investigated, fostering a rational design of experiments.

  7. Computer modeling of photodegradation

    NASA Technical Reports Server (NTRS)

    Guillet, J.

    1986-01-01

    A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters would include the solar spectrum, the daily levels and variations of temperature and relative humidity, and materials such as EVA. A brief description of the program, its operating principles, and how it works is given first. The presentation then focuses on recent work simulating aging in a normal terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain constant light illumination without a dark cycle, and this may be a critical factor missing from such schemes. For outdoor aging, the computer model indicates that the nightly dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.

  8. Research | Computational Science | NREL

    Science.gov Websites

    NREL's computational science experts use advanced high-performance computing (HPC) technologies, thereby accelerating the transformation of our nation's energy system. These computational science capabilities enable high-impact research.

  9. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
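
    As a hedged illustration of the map/reduce pattern described above (with Hadoop/HDFS replaced by Python's multiprocessing and a deliberately toy echo model), each map task simulates the raw-data contribution of a subset of point targets and the reduce step accumulates the partial matrices:

        from functools import reduce
        from multiprocessing import Pool
        import numpy as np

        N_AZ, N_RG = 64, 128                    # azimuth x range samples (toy sizes)

        def simulate_chunk(targets):
            """Map step: raw-data contribution of one chunk of point targets."""
            raw = np.zeros((N_AZ, N_RG), dtype=complex)
            az = np.arange(N_AZ)[:, None]
            rg = np.arange(N_RG)[None, :]
            for az0, rg0, amp in targets:
                raw += amp * np.exp(-1j * 0.1 * ((az - az0) ** 2 + (rg - rg0) ** 2))  # toy echo
            return raw

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            targets = [(rng.uniform(0, N_AZ), rng.uniform(0, N_RG), 1.0) for _ in range(1000)]
            chunks = [targets[i::8] for i in range(8)]        # 8 "map" tasks
            with Pool(processes=8) as pool:
                partials = pool.map(simulate_chunk, chunks)   # map: simulate chunks in parallel
            raw_data = reduce(np.add, partials)               # reduce: accumulate the echoes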

  10. Strategic governance: Addressing neonatal mortality in situations of political instability and weak governance.

    PubMed

    Wise, Paul H; Darmstadt, Gary L

    2015-08-01

    Neonatal mortality is increasingly concentrated globally in situations of conflict and political instability. In 1991, countries with high levels of political instability accounted for approximately 10% of all neonatal deaths worldwide; in 2013, this figure had grown to 31%. This has generated a "grand divergence" between those countries showing progress in neonatal mortality reduction compared to those lagging behind. We present new analyses demonstrating associations of neonatal mortality with political instability (r = 0.55) and poor governance (r = 0.70). However, heterogeneity in these relationships suggests that progress is possible in addressing neonatal mortality even in the midst of political instability and poor governance. In order to address neonatal mortality more effectively in such situations, we must better understand how specific elements of "strategic governance"--the minimal conditions of political stability and governance required for health service implementation--can be leveraged for successful introduction of specific health services. Thus, a more strategic approach to policy and program implementation in situations of conflict and political instability could lead to major accelerations in neonatal mortality reduction globally. However, this will require new cross-disciplinary collaborations among public health professionals, political scientists, and country actors. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  11. An "Elective Replacement" Approach to Providing Extra Help in Math: The Talent Development Middle Schools' Computer- and Team-Assisted Mathematics Acceleration (CATAMA) Program.

    ERIC Educational Resources Information Center

    Mac Iver, Douglas J.; Balfanz, Robert; Plank, Stephan B.

    1999-01-01

    Two studies evaluated the Computer- and Team-Assisted Mathematics Acceleration course (CATAMA) in Talent Development Middle Schools. The first study compared growth in math achievement for 96 seventh-graders (48 of whom participated in CATAMA and 48 of whom did not); the second study gathered data from interviews with, and observations of, CATAMA…

  12. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.

  13. Ion acceleration in a plasma focus

    NASA Technical Reports Server (NTRS)

    Gary, S. P.

    1974-01-01

    The electric and magnetic fields associated with anomalous diffusion to the axis of a linear plasma discharge are used to compute representative ion trajectories. Substantial axial acceleration of the ions is demonstrated.

  14. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI) the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view, applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite its architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles, and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most accelerators, KNL sports a memory subsystem consisting of low-level caches and 16 GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400 GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE-based application to reduce the time to solution for the following 3D model sizes in grid points: 128³, 256³, and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, by using the MCDRAM as cache for problem sizes that are smaller than 16 GB, further performance improvements were unlocked. Depending on the problem size, our overall results indicate that the KNL-based system is approximately 2.2× faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.

  15. Strategic cognitive sequencing: a computational cognitive neuroscience approach.

    PubMed

    Herd, Seth A; Krueger, Kai A; Kriete, Trenton E; Huang, Tsung-Ren; Hazy, Thomas E; O'Reilly, Randall C

    2013-01-01

    We address strategic cognitive sequencing, the "outer loop" of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or "self-instruction"). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a "bridging" state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  16. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
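
    The accelerated step is essentially a large matrix product; assuming a daylight-coefficient-style formulation (an assumption for illustration, since the paper's exact matrices may differ), annual illuminance can be written as illuminance(sensors, hours) = DC(sensors, patches) × sky(patches, hours). The hedged sketch below uses the abstract's 2306-element sky vector with random placeholder matrices; swapping numpy for a GPU array library (or an OpenCL kernel, as in the paper) offloads the multiply.

        import numpy as np   # a GPU array library such as CuPy would run the product on the GPU

        n_sensors, n_patches, n_hours = 10_000, 2306, 8760      # 2306-patch sky, one year of hours
        daylight_coeff = np.random.rand(n_sensors, n_patches)   # placeholder daylight-coefficient matrix
        sky_matrix = np.random.rand(n_patches, n_hours)         # placeholder annual sky vectors
        illuminance = daylight_coeff @ sky_matrix                # the matrix multiplication being accelerated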

  17. QALMA: A computational toolkit for the analysis of quality protocols for medical linear accelerators in radiation therapy

    NASA Astrophysics Data System (ADS)

    Rahman, Md Mushfiqur; Lei, Yu; Kalantzis, Georgios

    2018-01-01

    Quality Assurance (QA) for medical linear accelerators (linacs) is one of the primary concerns in external beam radiation therapy. Continued advancements in clinical accelerators and computer control technology make QA procedures more complex and time consuming, often requiring adequate software accompanied by specific phantoms. To ameliorate that matter, we introduce QALMA (Quality Assurance for Linac with MATLAB), a MATLAB toolkit which aims to simplify the quantitative analysis of linac QA, including Star-Shot analysis, the Picket Fence test, the Winston-Lutz test, Multileaf Collimator (MLC) log file analysis and verification of the light and radiation field coincidence test.
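
    As a hedged, toy illustration of the kind of analysis such a toolkit automates (QALMA itself is a MATLAB toolkit; this is not its code), the snippet below performs a Winston-Lutz-style check on a synthetic image: it locates the radiation field centre and the ball-bearing (BB) centroid and reports their offset in pixels.

        import numpy as np

        img = np.zeros((200, 200))
        img[60:140, 65:145] = 1.0                            # synthetic radiation field
        yy, xx = np.mgrid[0:200, 0:200]
        img[(yy - 102) ** 2 + (xx - 103) ** 2 < 25] = 0.2    # synthetic BB shadow

        field = img > 0.5                                    # field pixels
        field_center = np.array([yy[field].mean(), xx[field].mean()])
        bb = (img > 0) & (img < 0.5)                         # BB pixels
        bb_center = np.array([yy[bb].mean(), xx[bb].mean()])
        offset_px = bb_center - field_center                 # field-to-BB displacement (pixels)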

  18. KSC-99pp1225

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., excitedly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  19. KSC-99pp1224

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., eagerly tear into the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  20. KSC-99pp1222

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., look with curiosity at the wrapped computer equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  1. KSC-99pp1223

    NASA Image and Video Library

    1999-10-06

    Children at Coquina Elementary School, Titusville, Fla., "practice" using a computer keyboard, part of equipment donated by Kennedy Space Center. Coquina is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  2. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  3. Computational screening of organic polymer dielectrics for novel accelerator technologies

    DOE PAGES

    Pilania, Ghanshyam; Weis, Eric; Walker, Ethan M.; ...

    2018-06-18

    The use of infrared lasers to power accelerating dielectric structures is a developing area of research. Within this technology, the choice of the dielectric material forming the accelerating structures, such as the photonic band gap (PBG) structures, is dictated by a range of interrelated factors including their dielectric and optical properties, amenability to photo-polymerization, thermochemical stability and other target performance metrics of the particle accelerator. In this direction, electronic structure theory aided computational screening and design of dielectric materials can play a key role in identifying potential candidate materials with the targeted functionalities to guide experimental synthetic efforts. In an attempt to systematically understand the role of chemistry in controlling the electronic structure and dielectric properties of organic polymeric materials, here we employ empirical screening and density functional theory (DFT) computations, as a part of our multi-step hierarchal screening strategy. Our DFT based analysis focused on the bandgap, dielectric permittivity, and frequency-dependent dielectric losses due to lattice absorption as key properties to down-select promising polymer motifs. In addition to the specific application of dielectric laser acceleration, the general methodology presented here is deemed to be valuable in the design of new insulators with an attractive combination of dielectric properties.
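
    A hedged sketch of the down-selection logic in such a hierarchical screen: given hypothetical computed properties for candidate polymer motifs, keep only those whose band gap, dielectric constant, and loss meet target thresholds. All names and numbers below are invented for illustration, not values from the study.

        candidates = [
            {"motif": "polymer_A", "band_gap_eV": 5.1, "eps_r": 3.2, "loss_tangent": 1e-3},
            {"motif": "polymer_B", "band_gap_eV": 2.4, "eps_r": 4.8, "loss_tangent": 5e-3},
            {"motif": "polymer_C", "band_gap_eV": 6.0, "eps_r": 2.9, "loss_tangent": 8e-4},
        ]

        MIN_GAP_EV, MIN_EPS, MAX_LOSS = 4.0, 2.5, 2e-3   # illustrative screening targets

        shortlist = [c for c in candidates
                     if c["band_gap_eV"] >= MIN_GAP_EV
                     and c["eps_r"] >= MIN_EPS
                     and c["loss_tangent"] <= MAX_LOSS]
        print(shortlist)   # polymer_A and polymer_C pass in this toy example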

  4. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. Research is carried out by a staff of full-time scientists, augmented by visitors, students, postdoctoral candidates and visiting university faculty. The primary mission of RIACS, as chartered, is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: Automated Reasoning, Human-Centered Computing, and High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

  5. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  6. Using electronic patient records to inform strategic decision making in primary care.

    PubMed

    Mitchell, Elizabeth; Sullivan, Frank; Watt, Graham; Grimshaw, Jeremy M; Donnan, Peter T

    2004-01-01

    Although absolute risk of death associated with raised blood pressure increases with age, the benefits of treatment are greater in elderly patients. Despite this, the 'rule of halves' particularly applies to this group. We conducted a randomised controlled trial to evaluate different levels of feedback designed to improve identification, treatment and control of elderly hypertensives. Fifty-two general practices were randomly allocated to either: Control (n=19), Audit only feedback (n=16) or Audit plus Strategic feedback, prioritising patients by absolute risk (n=17). Feedback was based on electronic data, annually extracted from practice computer systems. Data were collected for 265,572 patients, 30,345 aged 65-79. The proportion of known hypertensives in each group with BP recorded increased over the study period and the numbers of untreated and uncontrolled patients reduced. There was a significant difference in mean systolic pressure between the Audit plus Strategic and Audit only groups and significantly greater control in the Audit plus Strategic group. Providing patient-specific practice feedback can impact on identification and management of hypertension in the elderly and produce a significant increase in control.

  7. Modeling the Value of Strategic Actions in the Superior Colliculus

    PubMed Central

    Thevarajah, Dhushan; Webb, Ryan; Ferrall, Christopher; Dorris, Michael C.

    2009-01-01

    In learning models of strategic game play, an agent constructs a valuation (action value) over possible future choices as a function of past actions and rewards. Choices are then stochastic functions of these action values. Our goal is to uncover a neural signal that correlates with the action value posited by behavioral learning models. We measured activity from neurons in the superior colliculus (SC), a midbrain region involved in planning saccadic eye movements, while monkeys performed two saccade tasks. In the strategic task, monkeys competed against a computer in a saccade version of the mixed-strategy game "matching-pennies". In the instructed task, saccades were elicited through explicit instruction rather than free choices. In both tasks neuronal activity and behavior were shaped by past actions and rewards with more recent events exerting a larger influence. Further, SC activity predicted upcoming choices during the strategic task and upcoming reaction times during the instructed task. Finally, we found that neuronal activity in both tasks correlated with an established learning model, the Experience Weighted Attraction model of action valuation (Camerer and Ho, 1999). Collectively, our results provide evidence that action values hypothesized by learning models are represented in the motor planning regions of the brain in a manner that could be used to select strategic actions. PMID:20161807
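
    For reference, a hedged sketch of the Experience Weighted Attraction (EWA) update that the abstract correlates with SC activity (after Camerer and Ho, 1999); the parameter values are illustrative, and the payoff function is a stand-in for a matching-pennies-style game.

        def ewa_update(A, N, chosen, s_opp, payoff, phi=0.9, rho=0.9, delta=0.5):
            """Update attractions A (one entry per action) after one round of play."""
            N_new = rho * N + 1.0                       # experience weight
            A_new = []
            for j in range(len(A)):
                # Chosen action is reinforced by its realized payoff; unchosen actions
                # receive a delta-weighted share of their forgone payoff.
                reinforce = (delta + (1.0 - delta) * (j == chosen)) * payoff(j, s_opp)
                A_new.append((phi * N * A[j] + reinforce) / N_new)
            return A_new, N_new

        # Matching pennies from one player's perspective: win (+1) on a match, lose (-1) otherwise.
        payoff = lambda j, s_opp: 1.0 if j == s_opp else -1.0
        A, N = [0.0, 0.0], 1.0
        A, N = ewa_update(A, N, chosen=0, s_opp=1, payoff=payoff)   # one example round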

  8. GPU-accelerated computation of electron transfer.

    PubMed

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.

  9. Campus Network Strategies: A Small College Perspective.

    ERIC Educational Resources Information Center

    Moberg, Thomas

    1999-01-01

    Offers advice to administrators and faculty in small colleges on planning, building, and managing campus computer networks. Also included are observations about the network as a strategic asset, funding and staffing issues, and planning for unexpected results. (Author/MSE)

  10. Corporate Perspective: An Interview with John Sculley.

    ERIC Educational Resources Information Center

    Temares, M. Lewis

    1989-01-01

    John Sculley, the chairman of the board of Apple Computer, Inc., discusses information technology management, management strategies, network management, the Chief Information Officer, strategic planning, back-to-the-future planning, business and university joint ventures, and security issues. (MLW)

  11. Sandia National Laboratories: Research: Materials Science

    Science.gov Websites

    Our research uses Sandia's experimental, theoretical, and computational capabilities to...

  12. Strategic deployment plan : intelligent transportation system (ITS) : early deployment study, Kansas City metropolitan bi-state area

    DOT National Transportation Integrated Search

    1997-01-01

    Intelligent transportation systems (ITS) are systems that utilize advanced technologies, including computer, communications and process control technologies, to improve the efficiency and safety of the transportation system. These systems encompass a...

  13. Accelerating Innovation: How Nuclear Physics Benefits Us All

    DOE R&D Accomplishments Database

    2011-01-01

    Innovation has been accelerated by nuclear physics in the areas of improving our health; making the world safer; electricity, environment, archaeology; better computers; contributions to industry; and training the next generation of innovators.

  14. Beam breakup in an advanced linear induction accelerator

    DOE PAGES

    Ekdahl, Carl August; Coleman, Joshua Eugene; McCuistian, Brian Trent

    2016-07-01

    Two linear induction accelerators (LIAs) have been in operation for a number of years at the Los Alamos Dual Axis Radiographic Hydrodynamic Test (DARHT) facility. A new multipulse LIA is being developed. We have computationally investigated the beam breakup (BBU) instability in this advanced LIA. In particular, we have explored the consequences of the choice of beam injector energy and the grouping of LIA cells. We find that within the limited range of options presently under consideration for the LIA architecture, there is little adverse effect on the BBU growth. The computational tool that we used for this investigation was the beam dynamics code Linear Accelerator Model for DARHT (LAMDA). To confirm that LAMDA was appropriate for this task, we first validated it through comparisons with the experimental BBU data acquired on the DARHT accelerators.

  15. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    NASA Astrophysics Data System (ADS)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to integrating time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  16. Utilizing Android and the Cloud Computing Environment to Increase Situational Awareness for a Mobile Distributed Response

    DTIC Science & Technology

    2012-03-01

    By using a common communication technology there is no need to develop a complicated communications plan and generate an ad hoc communications... Maintaining an accurate Common Operational Picture (COP) is a strategic requirement for... Keywords: Android Programming, Cloud Computing, Common Operating Picture, Web Programming.

  17. Fracture and Failure at and Near Interfaces Under Pressure

    DTIC Science & Technology

    1998-06-18

    ...realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding... tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or...

  18. Rim inertial measuring system

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Anderson, W. W.; Phillips, W. H. (Inventor)

    1981-01-01

    The invention includes an angular momentum control device (AMCD) having a rim and several magnetic bearing stations. The AMCD is in a strapped down position on a spacecraft. Each magnetic bearing station comprises means, including an axial position sensor, for controlling the position of the rim in the axial direction; and means, including a radial position sensor, for controlling the position of the rim in the radial direction. A first computer receives the signals from all the axial position sensors and computes the angular rates about first and second mutually perpendicular axes in the plane of the rim and computes the linear acceleration along a third axis perpendicular to the first and second axes. A second computer receives the signals from all the radial position sensors and computes the linear accelerations along the first and second axes.

  19. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operational room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and a solution based on FPGA technology have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and feature clarity when compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
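
    As a hedged CPU baseline for the kind of reconstruction being accelerated (the study's C-arm geometry is cone-beam; this 2D parallel-beam example only illustrates the computational pattern), a simple filtered back-projection with scikit-image looks like this:

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        phantom = shepp_logan_phantom()                        # 400x400 test image
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(phantom, theta=angles)                # forward projection
        reconstruction = iradon(sinogram, theta=angles)        # filtered back-projection (ramp filter)
        rmse = np.sqrt(np.mean((reconstruction - phantom) ** 2))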

  20. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  1. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-based atmospheric correction is used to obtain reflectance by decoupling the atmosphere and the surface, at the cost of long computation times. Parallel computing is one way to accelerate this step. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction of a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. A multispectral image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases accordingly; the maximum speedup is 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
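
    A hedged sketch of that multi-CPU strategy: split the image into tiles, let each worker apply an atmospheric-correction function to its tile, and reassemble the result. The correction function below is a placeholder, not the radiative transfer model used in the paper.

        from multiprocessing import Pool
        import numpy as np

        def correct_tile(tile):
            # Placeholder: a real implementation would invert a radiative transfer model
            # (e.g. via look-up tables) to turn top-of-atmosphere radiance into reflectance.
            return (tile - 0.02) / 0.85

        if __name__ == "__main__":
            image = np.random.rand(4096, 4096).astype(np.float32)   # one band, toy data
            n_workers = 8
            tiles = np.array_split(image, n_workers, axis=0)        # row-wise tiles
            with Pool(processes=n_workers) as pool:
                corrected_tiles = pool.map(correct_tile, tiles)     # correct tiles in parallel
            corrected = np.vstack(corrected_tiles)                  # reassemble the image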

  2. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
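
    A hedged sketch of the surrogate idea: train a small neural network on input/output pairs from an expensive solver, then evaluate the network instead of the solver. The "solver" below is a stand-in analytic function; real training data would come from viscoelastic codes.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_solver(x):
            # Stand-in for a viscoelastic stress/displacement calculation.
            return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2]

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0, 1, size=(5000, 3))      # e.g. position, time, source parameters
        y_train = expensive_solver(X_train)

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        surrogate.fit(X_train, y_train)                  # learn the solver's input-output map

        X_query = rng.uniform(0, 1, size=(1_000_000, 3))
        y_fast = surrogate.predict(X_query)              # cheap to evaluate at scale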

  3. Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs

    DOE PAGES

    Archibald, R.; Evans, K. J.; Salinger, A.

    2015-06-01

    The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable timestepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing units (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitates the performance improvements.

  4. A pervasive parallel framework for visualization: final report for FWP 10-014707

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    2014-01-01

    We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high performance computing. These accelerators represent significant challenges in updating our existing base of software. An intrinsic problem with this transition is a fundamental programming shift from message passing processes to much finer-grained thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementation; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.

  5. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203

  6. A wireless breathing-training support system for kinesitherapy.

    PubMed

    Tawa, Hiroki; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Caldwell, W Morton

    2009-01-01

    We have developed a new wireless breathing-training support system for kinesitherapy. The system consists of an optical sensor, an accelerometer, a microcontroller, a Bluetooth module and a laptop computer. The optical sensor, which is attached to the patient's chest, measures chest circumference. The low frequency components of circumference are mainly generated by breathing. The optical sensor outputs the circumference as serial digital data. The accelerometer measures the dynamic acceleration force produced by exercise, such as walking. The microcontroller sequentially samples this force. The acceleration force and chest circumference are sent sequentially via Bluetooth to a physical therapist's laptop computer, which receives and stores the data. The computer simultaneously displays these data so that the physical therapist can monitor the patient's breathing and acceleration waveforms and give instructions to the patient in real time during exercise. Moreover, the system enables a quantitative training evaluation and calculation of the volume of air inspired and expired by the lungs.

  7. High performance transcription factor-DNA docking with GPU computing

    PubMed Central

    2012-01-01

    Background Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. Results The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. Conclusions We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near-native structures. To the best of our knowledge, this is the first ad hoc effort of applying GPU or GPU clusters to the protein-DNA docking problem. PMID:22759575
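
    A hedged sketch of the Monte-Carlo / simulated-annealing search loop that such docking codes build on: the energy function below is a toy quadratic rather than a protein-DNA scoring function, and a GPU implementation would batch many such pose evaluations.

        import numpy as np

        def energy(pose):
            return float(np.sum((pose - 1.0) ** 2))     # toy potential, minimum at pose = 1

        rng = np.random.default_rng(0)
        pose = rng.normal(size=6)                       # e.g. 3 translations + 3 rotations
        E, T = energy(pose), 10.0                       # current energy and "temperature"
        for step in range(20000):
            trial = pose + rng.normal(scale=0.1, size=6)        # random pose perturbation
            dE = energy(trial) - E
            if dE < 0 or rng.random() < np.exp(-dE / T):        # Metropolis acceptance test
                pose, E = trial, E + dE
            T *= 0.9997                                  # geometric annealing schedule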

  8. Integrating sodium reduction strategies in the procurement process and contracting of food venues in the County of Los Angeles government, 2010-2012.

    PubMed

    Cummings, Patricia L; Kuo, Tony; Gase, Lauren N; Mugavero, Kristy

    2014-01-01

    Since sodium is ubiquitous in the food supply, recent approaches to sodium reduction have focused on increasing the availability of lower-sodium products through system-level and environmental changes. This article reviews integrated efforts by the Los Angeles County Sodium Reduction Initiative to implement these strategies at food venues in the County of Los Angeles government. The review used mixed methods, including a scan of the literature, key informant interviews, and lessons learned during 2010-2012 to assess program progress. Leveraging technical expertise and shared resources, the initiative strategically incorporated sodium reduction strategies into the overall work plan of a multipartnership food procurement program in Los Angeles County. To date, 3 County departments have incorporated new or updated nutrition requirements that included sodium limits and other strategies. The strategic coupling of sodium reduction to food procurement and general health promotion allowed for simultaneous advancement and acceleration of the County's sodium reduction agenda.

  9. Integrating Sodium Reduction Strategies in the Procurement Process and Contracting of Food Venues in the County of Los Angeles Government, 2010–2012

    PubMed Central

    Cummings, Patricia L.; Kuo, Tony; Gase, Lauren N.; Mugavero, Kristy

    2015-01-01

    Since sodium is ubiquitous in the food supply, recent approaches to sodium reduction have focused on increasing the availability of lower-sodium products through system-level and environmental changes. This article reviews integrated efforts by the Los Angeles County Sodium Reduction Initiative to implement these strategies at food venues in the County of Los Angeles government. The review used mixed methods, including a scan of the literature, key informant interviews, and lessons learned during 2010–2012 to assess program progress. Leveraging technical expertise and shared resources, the initiative strategically incorporated sodium reduction strategies into the overall work plan of a multipartnership food procurement program in Los Angeles County. To date, 3 County departments have incorporated new or updated nutrition requirements that included sodium limits and other strategies. The strategic coupling of sodium reduction to food procurement and general health promotion allowed for simultaneous advancement and acceleration of the County’s sodium reduction agenda. PMID:24322811

  10. TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame

    NASA Astrophysics Data System (ADS)

    Teck Chee, Chia; Hong, Chia Yee

    1999-03-01

    Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.
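
    For reference, the quantity such software lets students measure follows from Newton's second law for an ideal (massless pulley, frictionless) machine, a = g(m1 - m2)/(m1 + m2); the masses below are illustrative.

        g = 9.81                   # m/s^2
        m1, m2 = 0.105, 0.100      # kg, illustrative masses
        a = g * (m1 - m2) / (m1 + m2)
        print(f"predicted acceleration: {a:.3f} m/s^2")   # about 0.239 m/s^2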

  11. Atwood's Machine: Experiments in an Accelerating Frame.

    ERIC Educational Resources Information Center

    Chee, Chia Teck; Hong, Chia Yee

    1999-01-01

    Experiments in an accelerating frame are hard to perform. Illustrates how simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine. (Author/CCM)

  12. Information Retrieval Research and ESPRIT.

    ERIC Educational Resources Information Center

    Smeaton, Alan F.

    1987-01-01

    Describes the European Strategic Programme of Research and Development in Information Technology (ESPRIT), and its five programs: advanced microelectronics, software technology, advanced information processing, office systems, and computer integrated manufacturing. The emphasis on logic programming and ESPRIT as the European response to the…

  13. Sandia National Laboratories: Careers: Materials Science

    Science.gov Websites

    Our research uses Sandia's experimental, theoretical, and computational capabilities to establish the state of the art in...

  14. Integrating Information & Communications Technologies into the Classroom

    ERIC Educational Resources Information Center

    Tomei, Lawrence, Ed.

    2007-01-01

    "Integrating Information & Communications Technologies Into the Classroom" examines topics critical to business, computer science, and information technology education, such as: school improvement and reform, standards-based technology education programs, data-driven decision making, and strategic technology education planning. This book also…

  15. Perspectives on pathway perturbation: Focused research to enhance 3R objectives

    EPA Science Inventory

    In vitro high-throughput screening (HTS) and in silico technologies are emerging as 21st century tools for hazard identification. Computational methods that strategically examine cross-species conservation of protein sequence/structural information for chemical molecular targets ...

  16. The role of strategies in motor learning

    PubMed Central

    Taylor, Jordan A.; Ivry, Richard B.

    2015-01-01

    There has been renewed interest in the role of strategies in sensorimotor learning. The combination of new behavioral methods and computational methods has begun to unravel the interaction between processes related to strategic control and processes related to motor adaptation. These processes may operate on very different error signals. Strategy learning is sensitive to goal-based performance error. In contrast, adaptation is sensitive to prediction errors between the desired and actual consequences of a planned movement. The former guides what the desired movement should be, whereas the latter guides how to implement the desired movement. Whereas traditional approaches have favored serial models in which an initial strategy-based phase gives way to more automatized forms of control, it now seems that strategic and adaptive processes operate with considerable independence throughout learning, although the relative weight given the two processes will shift with changes in performance. As such, skill acquisition involves the synergistic engagement of strategic and adaptive processes. PMID:22329960

  17. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, Erik W.

    This report documents the fact that the work in creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  18. The PIP-II Conceptual Design Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ball, M.; Burov, A.; Chase, B.

    2017-03-01

    The Proton Improvement Plan-II (PIP-II) encompasses a set of upgrades and improvements to the Fermilab accelerator complex aimed at supporting a world-leading neutrino program over the next several decades. PIP-II is an integral part of the strategic plan for U.S. High Energy Physics as described in the Particle Physics Project Prioritization Panel (P5) report of May 2014 and formalized through the Mission Need Statement approved in November 2015. As an immediate goal, PIP-II is focused on upgrades to the Fermilab accelerator complex capable of providing proton beam power in excess of 1 MW on target at the initiation of the Long Baseline Neutrino Facility/Deep Underground Neutrino Experiment (LBNF/DUNE) program, currently anticipated for the mid-2020s. PIP-II is a part of a longer-term goal of establishing a high-intensity proton facility that is unique within the world, ultimately leading to multi-MW capabilities at Fermilab.

  19. Design of the new couplers for C-ADS RFQ

    NASA Astrophysics Data System (ADS)

    Shi, Ai-Min; Sun, Lie-Peng; Zhang, Zhou-Li; Xu, Xian-Bo; Shi, Long-Bo; Li, Chen-Xing; Wang, Wen-Bin

    2015-04-01

    A new special coupler with a bowl-shaped ceramic window for a proton linear accelerator, the Chinese Accelerator Driven System (C-ADS) at the Institute of Modern Physics (IMP), has been simulated and constructed, and continuous wave (CW) beam commissioning through a four-meter-long radio frequency quadrupole (RFQ) was completed by the end of July 2014. During conditioning and beam experiments, problems such as sparking and thermal issues gradually emerged. Finally, the two new couplers were conditioned to almost 110 kW CW power and 120 kW in pulsed mode, respectively. The 10 mA beam-intensity experiments have now been completed, and the couplers showed no thermal or electromagnetic problems during operation. The detailed design and results are presented in the paper. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03020500)

  20. Internal controls over computer-processed financial data at Boeing Petroleum Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-02-14

    The Strategic Petroleum Reserve (SPR) is responsible for purchasing and storing crude oil to mitigate the potential adverse impact of any future disruptions in crude oil imports. Boeing Petroleum Services, Inc. (BPS) operates the SPR under a US Department of Energy (DOE) management and operating contract. BPS receives support for various information systems and other information processing needs from a mainframe computer center. The objective of the audit was to determine if the internal controls implemented by BPS for computer systems were adequate to assure processing reliability.

  1. Strategic Use of Microscrews for Enhancing the Accuracy of Computer-Guided Implant Surgery in Fully Edentulous Arches: A Case History Report.

    PubMed

    Lee, Du-Hyeong

    Implant guide systems can be classified by their supporting structure as tooth-, mucosa-, or bone-supported. Mucosa-supported guides for fully edentulous arches show lower accuracy in implant placement because of errors in image registration and guide positioning. This article introduces the application of a novel microscrew system for computer-aided implant surgery. This technique can markedly improve the accuracy of computer-guided implant surgery in fully edentulous arches by eliminating errors from image fusion and guide positioning.

  2. Defense Science Board 2006 Summer Study on 21st Century Strategic Technology Vectors. Volume 4. Accelerating the Transition of Technologies into U.S. Capabilities

    DTIC Science & Technology

    2007-04-01

perform more research on future defense technology, the DOD should invest in companies that are leaders in the development of innovative sources of next...well. In fact, the kit from one vendor out-performed the standard up-armor kits being produced for the Army’s acquisition team. That Army company ...subsequently purchased the company that had built the improved performance kit. As part of the process to look at alternatives, the Army Material

  3. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, making this both a data-intensive and a computing-intensive problem. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed to handle the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
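
    As an illustration of the accumulation pattern described above, the sketch below shows a toy map/reduce pipeline in Python: each map task emits per-range-bin echo contributions for one point target, and the reduce step sums contributions that share a bin. The target list, grid size, and phase model are invented toy values; this is not the authors' Hadoop/HDFS implementation.

```python
from collections import defaultdict
import cmath

# Toy illustration of the map/reduce accumulation pattern only; the target
# list, grid size, and phase model are invented, and this is not the
# authors' Hadoop/HDFS implementation.

def map_target(target):
    """Map step: emit (range_bin, complex echo contribution) pairs for one target."""
    position, reflectivity = target          # hypothetical normalized position
    for rng_bin in range(8):                 # toy raw-data grid of 8 range bins
        phase = 2.0 * cmath.pi * position * rng_bin
        yield rng_bin, reflectivity * cmath.exp(1j * phase)

def reduce_bins(pairs):
    """Reduce step: sum every contribution that shares the same range bin."""
    acc = defaultdict(complex)
    for rng_bin, contribution in pairs:
        acc[rng_bin] += contribution
    return dict(acc)

targets = [(0.10, 1.0), (0.40, 0.5), (0.75, 2.0)]   # (position, reflectivity)
pairs = (pair for tgt in targets for pair in map_target(tgt))
print(reduce_bins(pairs))
```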

  4. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the splitting and roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by empirical formulas. The effect on skyshine dose caused by different accelerator head structures is also discussed in this paper.

  5. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, many-order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one trillion particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using both finite difference and finite element approaches.

  6. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, many-order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one trillion particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using both finite difference and finite element approaches.

  7. Real-time orthorectification by FPGA-based hardware acceleration

    NASA Astrophysics Data System (ADS)

    Kuo, David; Gordon, Don

    2010-10-01

Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and ease of correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value in the long processing cycle. However, the computation on each pixel can be reduced substantially by reusing computational results from neighboring pixels, and accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized coprocessor, implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property), shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by the pipelined systolic array architecture. The optimal partition between software and hardware, the timing profile among image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput are also discussed. The software, running on a workstation with the acceleration hardware, orthorectifies 16 megapixels per second, which is 16 times faster than without the hardware. It turns the production time from months into days. A real-life success story of an imaging satellite company that adopted such workstations for its orthorectified imagery production is presented. Other image processing computations that could be accelerated more efficiently by the same approach are also analyzed.

  8. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The approach performs the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
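
    To illustrate why GPU FFTs dominate the cost of image-based phase retrieval, here is a minimal NumPy sketch of the classic Gerchberg-Saxton iteration (the unmodified ancestor of MGS, not JPL's variant); the pupil and focal-plane amplitudes are synthetic toy data.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=50):
    """Classic Gerchberg-Saxton iteration: recover the pupil phase from known
    pupil-plane and focal-plane amplitudes. Each iteration is two FFTs,
    which is why GPU-accelerated FFTs pay off so strongly."""
    phase = np.zeros_like(pupil_amp)
    for _ in range(n_iter):
        pupil_field = pupil_amp * np.exp(1j * phase)
        focal_field = np.fft.fft2(pupil_field)
        # enforce the measured focal-plane amplitude, keep the estimated phase
        focal_field = focal_amp * np.exp(1j * np.angle(focal_field))
        pupil_field = np.fft.ifft2(focal_field)
        phase = np.angle(pupil_field)
    return phase

# Toy example with a flat pupil and a synthetic focal-plane image
pupil = np.ones((64, 64))
true_phase = 0.3 * np.random.randn(64, 64)
focal = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
estimate = gerchberg_saxton(pupil, focal)
```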

  9. Design and performance frameworks for constructing problem-solving simulations.

    PubMed

    Stevens, Ron; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks ideally would be guided less by the strengths/limitations of the presentation media and more by cognitive analyses detailing the goals of the tasks, the needs and abilities of students, and the resulting decision outcomes needed by different audiences. This article describes a problem-solving environment and associated theoretical framework for investigating how students select and use strategies as they solve complex science problems. A framework is first described for designing on-line problem spaces that highlights issues of content, scale, cognitive complexity, and constraints. While this framework was originally designed for medical education, it has proven robust and has been successfully applied to learning environments from elementary school through medical school. Next, a similar framework is detailed for collecting student performance and progress data that can provide evidence of students' strategic thinking and that could potentially be used to accelerate student progress. Finally, experimental validation data are presented that link strategy selection and use with other metrics of scientific reasoning and student achievement.

  10. Design and Performance Frameworks for Constructing Problem-Solving Simulations

    PubMed Central

    Stevens, Ron; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks ideally would be guided less by the strengths/limitations of the presentation media and more by cognitive analyses detailing the goals of the tasks, the needs and abilities of students, and the resulting decision outcomes needed by different audiences. This article describes a problem-solving environment and associated theoretical framework for investigating how students select and use strategies as they solve complex science problems. A framework is first described for designing on-line problem spaces that highlights issues of content, scale, cognitive complexity, and constraints. While this framework was originally designed for medical education, it has proven robust and has been successfully applied to learning environments from elementary school through medical school. Next, a similar framework is detailed for collecting student performance and progress data that can provide evidence of students' strategic thinking and that could potentially be used to accelerate student progress. Finally, experimental validation data are presented that link strategy selection and use with other metrics of scientific reasoning and student achievement. PMID:14506505

  11. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results; however, it also requires the most complex source code. The parallel SCE-UA has bright prospects for application to real-world problems.
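
    As a hedged sketch of the setup described above, the following snippet defines the Griewank benchmark function and evaluates a candidate population in parallel across CPU cores; it illustrates only the embarrassingly parallel objective-evaluation step, not the SCE-UA shuffling logic or the OpenMP/OpenCL/CUDA/OpenACC implementations from the paper, and the population size and bounds are arbitrary.

```python
import numpy as np
from multiprocessing import Pool

def griewank(x):
    """Griewank benchmark function; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

if __name__ == "__main__":
    # The expensive part of population-based optimizers such as SCE-UA is
    # evaluating the objective for every candidate; that step is trivially
    # parallel (here over CPU cores, in the paper over OpenMP/OpenCL/CUDA/OpenACC).
    rng = np.random.default_rng(0)
    population = rng.uniform(-600, 600, size=(64, 10))   # 64 candidates, 10-D
    with Pool() as pool:
        fitness = pool.map(griewank, list(population))
    print(min(fitness))
```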

  12. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
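
    The following NumPy sketch shows a reference (CPU) definition of the vector median filter that a CUDA version would parallelize: for each window, the output pixel is the one whose summed Euclidean distance to all other window pixels is smallest. The window size and test image are illustrative, and this is not the authors' GPU code.

```python
import numpy as np

def vector_median_filter(img, win=3):
    """Reference (CPU) vector median filter for an RGB image.
    For every window, the output pixel is the window pixel whose summed
    Euclidean distance to all other pixels in the window is smallest --
    the per-window distance computation that the GPU version parallelizes."""
    h, w, _ = img.shape
    r = win // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge").astype(float)
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win].reshape(-1, 3)   # n^2 vectors
            d = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=2)
            out[y, x] = block[np.argmin(d.sum(axis=1))]
    return out

noisy = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
clean = vector_median_filter(noisy)
```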

  13. Educating and Training Accelerator Scientists and Technologists for Tomorrow

    NASA Astrophysics Data System (ADS)

    Barletta, William; Chattopadhyay, Swapan; Seryi, Andrei

    2012-01-01

    Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intensive courses at regional accelerator schools. This article describes the approaches being used to satisfy the educational curiosity of a growing number of interested physicists and engineers.

  14. Educating and Training Accelerator Scientists and Technologists for Tomorrow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barletta, William A.; Chattopadhyay, Swapan; Seryi, Andrei

    2012-07-01

    Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intense courses at regional accelerator schools. This paper describes the approaches being used to satisfy the educational interests of a growing number of interested physicists and engineers.

  15. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Chan; Mori, W.

    2013-10-21

This is the final report on the DOE grant number DE-FG02-92ER40727 titled, “Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators.” During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, has made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we will focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under this grant. The four tasks are: Plasma Wakefield Accelerator Research at FACET, SLAC National Accelerator Laboratory; In House Research at UCLA’s Neptune and 20 TW Laser Laboratories; Laser-Wakefield Acceleration (LWFA) in the Self Guided Regime: Experiments at the Callisto Laser at LLNL; and Theory and Simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals, graduation and continued training of high-quality Ph.D.-level students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  16. Comparison of Accelerated Testing with Modeling to Predict Lifetime of CPV Solder Layers (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverman, T. J.; Bosco, N.; Kurtz, S.

    2012-03-01

    Concentrating photovoltaic (CPV) cell assemblies can fail due to thermomechanical fatigue in the die-attach layer. In this presentation, we show the latest results from our computational model of thermomechanical fatigue. The model is used to estimate the relative lifetime of cell assemblies exposed to various temperature histories consistent with service and with accelerated testing. We also present early results from thermal cycling experiments designed to help validate the computational model.

  17. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale

    PubMed Central

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2017-01-01

With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049

  18. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale.

    PubMed

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2016-10-01

With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems-like Apache Spark and Hadoop-to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

  19. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
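
    For readers unfamiliar with the building blocks, the sketch below shows the soft-threshold operator and a FISTA-style momentum loop on a toy sparse least-squares problem; it is only meant to illustrate the shrinkage and acceleration ideas named above, not the OSTR/TDM-STF cone-beam reconstruction pipeline itself, and the matrix sizes and regularization weight are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-threshold (shrinkage) operator, the core of soft-threshold filtering."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min 0.5*||A x - b||^2 + lam*||x||_1 -- illustrates the
    momentum ('fast iterative shrinkage thresholding') acceleration the
    abstract applies; this toy problem stands in for the CT reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

A = np.random.randn(80, 120)
x_true = np.zeros(120)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * np.random.randn(80)
x_hat = fista(A, b, lam=0.1)
```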

  20. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage because the convolution diffraction calculation requires the expansion of the 2D cross-sectional images to avoid the wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "Band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, we need to repeat the same calculation for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
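
    As context for the expansion issue mentioned above, here is a minimal NumPy angular-spectrum propagator that zero-pads the field to twice its size to avoid wraparound noise; the wavelength, pixel pitch, and propagation distance are arbitrary toy values, and this is the conventional convolution-based method rather than the band-limited double-step Fresnel diffraction proposed in the paper.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Convolution-based (angular spectrum) diffraction with 2x zero-padding,
    the expansion step that the band-limited double-step Fresnel method avoids."""
    n = u0.shape[0]
    big = np.zeros((2 * n, 2 * n), dtype=complex)
    big[:n, :n] = u0                                   # expand to avoid wraparound
    fx = np.fft.fftfreq(2 * n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # drop evanescent waves
    h = np.exp(1j * kz * z) * (arg > 0.0)
    u1 = np.fft.ifft2(np.fft.fft2(big) * h)
    return u1[:n, :n]                                  # crop back to original size

# Toy example: propagate a 256x256 aperture by 5 cm at 633 nm with 10 um pixels
aperture = np.zeros((256, 256), dtype=complex)
aperture[96:160, 96:160] = 1.0
field = angular_spectrum(aperture, 633e-9, 10e-6, 0.05)
```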

  1. Opportunities and challenges of cloud computing to improve health care services.

    PubMed

    Kuo, Alex Mu-Hsing

    2011-09-21

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed.

  2. KSC-99pp1226

    NASA Image and Video Library

    1999-10-06

Nancy Nichols, principal of South Lake Elementary School, Titusville, Fla., joins students in teacher Michelle Butler's sixth grade class who are unwrapping computer equipment donated by Kennedy Space Center. South Lake is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  3. Manufacturing in space: Fluid dynamics numerical analysis

    NASA Technical Reports Server (NTRS)

    Robertson, S. J.; Nicholson, L. A.; Spradley, L. W.

    1982-01-01

    Numerical computations were performed for natural convection in circular enclosures under various conditions of acceleration. It was found that subcritical acceleration vectors applied in the direction of the temperature gradient will lead to an eventual state of rest regardless of the initial state of motion. Supercritical acceleration vectors will lead to the same steady state condition of motion regardless of the initial state of motion. Convection velocities were computed for acceleration vectors at various angles of the initial temperature gradient. The results for Rayleigh numbers of 1000 or less were found to closely follow Weinbaum's first order theory. Higher Rayleigh number results were shown to depart significantly from the first order theory. Supercritical behavior was confirmed for Rayleigh numbers greater than the known supercritical value of 9216. Response times were determined to provide an indication of the time required to change states of motion for the various cases considered.

  4. The BBN (Bolt Beranek and Newman) Knowledge Acquisition Project. Phase 1. Functional Description; Test Plan.

    DTIC Science & Technology

    1987-05-01

Report documentation excerpt: references cited include Symbolics, Inc.; Carnegie Group, Inc., KnowledgeCraft (1985); and Moser, Margaret, "An Overview of NIKL" (BBN). Performing organization: BBN Laboratories Inc., 10 Moulton St., Cambridge, MA 02238. Keywords: knowledge representation, expert systems, strategic computing.

  5. Laboratory for Computer Science Progress Report 21, July 1983-June 1984.

    DTIC Science & Technology

    1984-06-01

Report excerpt: contents include Distributed Consensus; Election of a Leader in a Distributed Ring of Processors; Distributed Network Algorithms; Diagnosis ... "... multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense Advanced Research Projects Agency, will enable ..." Academic staff: P. Szolovits (Group Leader), R. Patil. Collaborating investigators: M. Criscitiello, M.D., Tufts-New England Medical Center Hospital.

  6. The Operational Movement Planning System: A Prototype for the Strategic Command Function

    DTIC Science & Technology

    1993-06-01

environment. The White Paper identifies "computer based systems to support the decision making of operational and higher level commanders" as an important ... exist and objective decisions can be made. When extending the application of computers into the upper levels of an organisation, higher productivity ...

  7. South Lake Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

Nancy Nichols, principal of South Lake Elementary School, Titusville, Fla., joins students in teacher Michelle Butler's sixth grade class who are unwrapping computer equipment donated by Kennedy Space Center. South Lake is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  8. Cambridge Elementary students enjoy gift of computers

    NASA Technical Reports Server (NTRS)

    1999-01-01

Children at Cambridge Elementary School, Cocoa, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Cambridge is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. Behind the children is Jim Thurston, a school volunteer and retired employee of USBI, who shared in the project. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  9. Sandia National Laboratories: National Security Missions: Nuclear Weapons

    Science.gov Websites

Website excerpt: Technology Partnerships; Cooperative Research and Development Agreement (CRADA); Strategic Partnership Projects, Non-Federal Entity (SPP/NFE) Agreements; "... in which fundamental science, computer models, and unique experimental facilities come together so ..."

  10. Command, Control, Communication, Computers and Information Technology (C4&IT). Strategic Plan, FY2008 - 2012

    DTIC Science & Technology

    2008-01-01

Strategic plan excerpt: Goal 5: Organizational Excellence. "... fully realized in the next 5 years, it is clear that coordinated activity must occur now to improve the Coast Guard's operational capabilities."

  11. Calculation reduction method for color digital holography and computer-generated hologram using color space conversion

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Nagahama, Yuki; Kakue, Takashi; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Ito, Tomoyoshi

    2014-02-01

A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB color space. We calculate color DH and CGHs in other color spaces, for example YCbCr color space, to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily perceives even small differences in the luminance component but is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components, and then convert the diffracted results from YCbCr color space back to RGB color space. The proposed method, which in theory can accelerate the calculation by up to a factor of 3, runs more than twice as fast as the equivalent calculation in RGB color space.
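
    A minimal sketch of the sampling strategy described above: convert RGB to YCbCr (full-range BT.601 coefficients assumed here), keep Y at full resolution, and down-sample Cb and Cr by two in each direction before the per-plane diffraction calculations. The image and down-sampling factor are illustrative, not taken from the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr conversion (values assumed in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 0.5
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 0.5
    return y, cb, cr

def downsample2(c):
    """2x2 averaging; the chroma planes carry less perceptual detail, so they
    can be processed at quarter resolution to cut the diffraction workload."""
    return 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])

rgb = np.random.rand(256, 256, 3)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_small, cr_small = downsample2(cb), downsample2(cr)
# Y stays at full resolution; Cb and Cr are now 128x128, so two of the three
# per-plane diffraction calculations run on roughly a quarter of the samples.
```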

  12. Parameter investigation with line-implicit lower-upper symmetric Gauss-Seidel on 3D stretched grids

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-03-01

An implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver has been implemented as a multigrid smoother combined with a line-implicit method as an acceleration technique for Reynolds-averaged Navier-Stokes (RANS) simulation on stretched meshes. The computational fluid dynamics code concerned is Edge, an edge-based finite volume Navier-Stokes flow solver for structured and unstructured grids. The paper focuses on the investigation of the parameters related to our novel line-implicit LU-SGS solver for convergence acceleration on 3D RANS meshes. The LU-SGS parameters are the Courant-Friedrichs-Lewy number, the left-hand side dissipation, and the convergence criterion for the iterative solution of the linear problem arising from linearisation of the implicit scheme. The influence of these parameters on the overall convergence is presented and default values are defined for maximum convergence acceleration. The optimised settings are applied to 3D RANS computations for comparison with explicit and line-implicit Runge-Kutta smoothing. For most of the cases, a computing-time acceleration of the order of 2 is found, depending on the mesh type, namely the boundary layer, and the magnitude of residual reduction.
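
    For orientation, the snippet below implements a plain scalar symmetric Gauss-Seidel iteration (a forward sweep followed by a backward sweep) on a small diagonally dominant system; the paper's LU-SGS smoother applies the same lower/upper sweeping idea to the linearised implicit RANS operator, combined with line-implicit treatment in stretched boundary-layer regions, which this toy example does not attempt.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0, sweeps=10):
    """Scalar symmetric Gauss-Seidel: one forward and one backward sweep per
    iteration. LU-SGS applies the same lower/upper sweeping idea to the
    linearised implicit operator of the flow solver."""
    x = x0.copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                       # forward (lower) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(n)):             # backward (upper) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = symmetric_gauss_seidel(A, b, np.zeros(3))
print(x, A @ x)
```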

  13. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    NASA Astrophysics Data System (ADS)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  14. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
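
    A minimal serial sketch of the SSA direct method for a toy reversible isomerization A <-> B is shown below; the model, rate constants, and realization count are illustrative, not taken from the paper. In the GPU implementation described above, each such realization would be executed by its own thread, with the time-course recording handled asynchronously.

```python
import numpy as np

def ssa_direct(x0, rates, t_end, rng):
    """Gillespie direct method for the toy model A <-> B.
    Reactions: A -> B with propensity k1*A, B -> A with propensity k2*B."""
    k1, k2 = rates
    a, b = x0
    t, trajectory = 0.0, [(0.0, a, b)]
    while t < t_end:
        props = np.array([k1 * a, k2 * b])         # reaction propensities
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)          # time to next reaction
        if rng.random() * total < props[0]:        # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        trajectory.append((t, a, b))               # record the time course
    return trajectory

rng = np.random.default_rng(1)
# Multiple independent realizations -- exactly the part the GPU runs in parallel.
runs = [ssa_direct((100, 0), (1.0, 0.5), 5.0, rng) for _ in range(10)]
```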

  15. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  16. Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2017-06-13

    We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.

  17. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided further speedups on GPUs, outperforming both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
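
    As a simplified stand-in for the simulations discussed above, the sketch below advances a generic 2D wave equation with an explicit five-point finite-difference stencil in NumPy; it is not the cardiac action potential model from the paper, but the per-grid-point stencil update is exactly the kind of loop that OpenACC pragmas, OpenCL kernels, or OpenMP threads parallelize. Grid size, time step, and wave speed are toy values chosen below the CFL limit.

```python
import numpy as np

def step_wave_2d(u_prev, u_curr, c, dt, dx):
    """One explicit leapfrog step of the 2D wave equation u_tt = c^2 (u_xx + u_yy).
    Every interior grid point is updated independently from a 5-point stencil,
    which is why the loop maps naturally onto GPU threads or OpenACC gangs."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr) / dx**2
    u_next = 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # fixed edges
    return u_next

n, dx, dt, c = 200, 1.0, 0.4, 1.0          # dt chosen below the 2D CFL limit
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0               # point disturbance at the center
for _ in range(500):
    u_prev, u_curr = u_curr, step_wave_2d(u_prev, u_curr, c, dt, dx)
```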

  18. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.

  19. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths than CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857

  20. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

The present evaluation of the basic techniques and illustrative results of charged particle-modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
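
    A hedged sketch of test-particle integration: the Boris pusher below advances a charged particle in prescribed electric and magnetic fields. It uses uniform fields and arbitrary units for simplicity; shock studies such as the one above instead prescribe E and B as functions of position across an oblique shock front and follow ensembles of injected ions.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the Boris algorithm for dv/dt = (q/m)(E + v x B).
    A standard test-particle integrator; shock studies prescribe E and B
    as functions of position across the shock front instead of constants."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    v_prime = v_minus + np.cross(v_minus, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

x, v = np.zeros(3), np.array([1.0, 0.0, 0.1])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])      # uniform fields: pure gyration
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.05)
print(np.linalg.norm(v))    # speed is conserved to round-off in a pure B field
```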

  1. Dissociable contribution of prefrontal and striatal dopaminergic genes to learning in economic games

    PubMed Central

    Set, Eric; Saez, Ignacio; Zhu, Lusha; Houser, Daniel E.; Myung, Noah; Zhong, Songfa; Ebstein, Richard P.; Chew, Soo Hong; Hsu, Ming

    2014-01-01

    Game theory describes strategic interactions where success of players’ actions depends on those of coplayers. In humans, substantial progress has been made at the neural level in characterizing the dopaminergic and frontostriatal mechanisms mediating such behavior. Here we combined computational modeling of strategic learning with a pathway approach to characterize association of strategic behavior with variations in the dopamine pathway. Specifically, using gene-set analysis, we systematically examined contribution of different dopamine genes to variation in a multistrategy competitive game captured by (i) the degree players anticipate and respond to actions of others (belief learning) and (ii) the speed with which such adaptations take place (learning rate). We found that variation in genes that primarily regulate prefrontal dopamine clearance—catechol-O-methyl transferase (COMT) and two isoforms of monoamine oxidase—modulated degree of belief learning across individuals. In contrast, we did not find significant association for other genes in the dopamine pathway. Furthermore, variation in genes that primarily regulate striatal dopamine function—dopamine transporter and D2 receptors—was significantly associated with the learning rate. We found that this was also the case with COMT, but not for other dopaminergic genes. Together, these findings highlight dissociable roles of frontostriatal systems in strategic learning and support the notion that genetic variation, organized along specific pathways, forms an important source of variation in complex phenotypes such as strategic behavior. PMID:24979760
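
    To make the two behavioral quantities concrete, the sketch below implements a simple weighted fictitious-play learner: beliefs about the opponent's next action decay at a learning rate and shift toward each observed action, and choices are best responses to those beliefs. The 2x2 payoff matrix, opponent action sequence, and parameter value are invented toy inputs, not the task or estimation model used in the study.

```python
import numpy as np

def update_beliefs(beliefs, observed_action, phi):
    """Weighted fictitious play: beliefs over the opponent's actions decay at
    rate phi (the 'learning rate') and shift toward the action just observed.
    phi near 1 -> slow, long-memory updating; phi near 0 -> fast updating."""
    onehot = np.zeros_like(beliefs)
    onehot[observed_action] = 1.0
    beliefs = phi * beliefs + (1.0 - phi) * onehot
    return beliefs / beliefs.sum()

def best_response(beliefs, payoff):
    """Belief learning: pick the action with the highest expected payoff given
    current beliefs (payoff[i, j] = my payoff when I play i, opponent plays j)."""
    return int(np.argmax(payoff @ beliefs))

payoff = np.array([[3.0, 0.0], [5.0, 1.0]])       # toy 2x2 game
beliefs = np.array([0.5, 0.5])
for opponent_action in [0, 1, 1, 0, 1]:           # observed opponent play
    my_action = best_response(beliefs, payoff)
    beliefs = update_beliefs(beliefs, opponent_action, phi=0.8)
```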

  2. Dissociable contribution of prefrontal and striatal dopaminergic genes to learning in economic games.

    PubMed

    Set, Eric; Saez, Ignacio; Zhu, Lusha; Houser, Daniel E; Myung, Noah; Zhong, Songfa; Ebstein, Richard P; Chew, Soo Hong; Hsu, Ming

    2014-07-01

    Game theory describes strategic interactions where success of players' actions depends on those of coplayers. In humans, substantial progress has been made at the neural level in characterizing the dopaminergic and frontostriatal mechanisms mediating such behavior. Here we combined computational modeling of strategic learning with a pathway approach to characterize association of strategic behavior with variations in the dopamine pathway. Specifically, using gene-set analysis, we systematically examined contribution of different dopamine genes to variation in a multistrategy competitive game captured by (i) the degree players anticipate and respond to actions of others (belief learning) and (ii) the speed with which such adaptations take place (learning rate). We found that variation in genes that primarily regulate prefrontal dopamine clearance--catechol-O-methyl transferase (COMT) and two isoforms of monoamine oxidase--modulated degree of belief learning across individuals. In contrast, we did not find significant association for other genes in the dopamine pathway. Furthermore, variation in genes that primarily regulate striatal dopamine function--dopamine transporter and D2 receptors--was significantly associated with the learning rate. We found that this was also the case with COMT, but not for other dopaminergic genes. Together, these findings highlight dissociable roles of frontostriatal systems in strategic learning and support the notion that genetic variation, organized along specific pathways, forms an important source of variation in complex phenotypes such as strategic behavior.

  3. Strategic management of technostress. The chaining of Prometheus.

    PubMed

    Caro, D H; Sethi, A S

    1985-12-01

    The article proposes the concept of technostress and makes a strong recommendation for conducting research based on key researchable hypotheses. A conceptual framework of technostress is suggested to provide some focus to future research. A number of technostress management strategies are put forward, including strategic technological planning, organization culture development, technostress monitoring systems, and technouser self-development programs. The management of technostress is compared to the chaining of Prometheus, which, left uncontrolled, can create havoc in an organization. The authors believe that organizations have a responsibility to introduce, diffuse, and manage computer technology in such a way that it is congruent with the principles of sound, supportive, and humanistic management.

  4. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media and provide access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly for AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
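
    A minimal sketch of the single-source idea behind this record: one kernel template specialized by string substitution into both CUDA and OpenCL source. This is not the BOAST API; the kernel, qualifiers, and names are illustrative only.

    ```python
    # Illustrative sketch (not BOAST itself): emit the same vector kernel for
    # CUDA and OpenCL from one template, so one description targets either
    # accelerator language.

    KERNEL_TEMPLATE = """
    {qualifiers} void scale_add({global_q} float *out,
                                {global_q} const float *a,
                                {global_q} const float *b,
                                const float alpha, const int n)
    {{
        int i = {index_expr};
        if (i < n) out[i] = alpha * a[i] + b[i];
    }}
    """

    TARGETS = {
        "cuda": {
            "qualifiers": 'extern "C" __global__',
            "global_q": "",
            "index_expr": "blockIdx.x * blockDim.x + threadIdx.x",
        },
        "opencl": {
            "qualifiers": "__kernel",
            "global_q": "__global",
            "index_expr": "get_global_id(0)",
        },
    }

    def generate(target: str) -> str:
        """Return kernel source specialized for the chosen backend."""
        return KERNEL_TEMPLATE.format(**TARGETS[target])

    if __name__ == "__main__":
        print(generate("cuda"))
        print(generate("opencl"))
    ```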

  5. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphics Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one 12-core CPU node. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performance of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved. PMID:28282428
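
    The balanced-partition idea can be illustrated by splitting grid columns between the CPU part and the accelerator part of a hybrid node in proportion to their sustained throughputs; this is a hypothetical sketch, and the rates and column counts are made up rather than taken from the paper's partition scheme.

    ```python
    # Hypothetical sketch of a throughput-proportional domain split: assign grid
    # columns to the CPU part and the accelerator part of a hybrid node so both
    # finish a time step at roughly the same time. Throughput numbers are made up.

    def split_columns(n_columns: int, cpu_rate: float, acc_rate: float):
        """Return (cpu_columns, accelerator_columns) proportional to sustained rates."""
        total = cpu_rate + acc_rate
        cpu_cols = round(n_columns * cpu_rate / total)
        return cpu_cols, n_columns - cpu_cols

    if __name__ == "__main__":
        # e.g. a 12-core CPU sustaining 1 unit vs. an accelerator sustaining 8-20 units
        for acc_rate in (8.0, 14.0, 20.0):
            print(acc_rate, split_columns(n_columns=1024, cpu_rate=1.0, acc_rate=acc_rate))
    ```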

  6. Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameth W.; Kapur, Mohit

    2016-01-05

    A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.

  7. Chasing a Fault across Ship and Shore

    ERIC Educational Resources Information Center

    Evans, Michael A.; Schwen, Thomas M.

    2006-01-01

    Knowledge management (KM) in the U.S. Navy is championed as a strategic initiative to improve shipboard maintenance and troubleshooting at a distance. The approach requires capturing, coordinating, and distributing domain expertise in electronics and computer engineering via advanced information and communication technologies. Coordination must be…

  8. Healthcare's Future: Strategic Investment in Technology.

    PubMed

    Franklin, Michael A

    2018-01-01

    Recent and rapid advances in the implementation of technology have greatly affected the quality and efficiency of healthcare delivery in the United States. Simultaneously, diverse generational pressures-including the consumerism of millennials and unsustainable growth in the costs of care for baby boomers-have accelerated a revolution in healthcare delivery that was marked in 2010 by the passage of the Affordable Care Act. Against this backdrop, Maryland and the Centers for Medicare & Medicaid Services entered into a partnership in 2014 to modernize the Maryland All-Payer Model. Under this architecture, each Maryland hospital negotiates a global budget revenue agreement with the state's rate-setting agency, limiting the hospital's annual revenue to the budgetary cap established by the state. At Atlantic General Hospital (AGH), leaders had established a disciplined strategic planning process in which the board of trustees, medical staff, and administration annually agree on goals and initiatives to achieve the objectives set forth in its five-year strategic plans. This article describes two initiatives to improve care using technology. In 2006, AGH introduced a service guarantee in the emergency room (ER); the ER 30-Minute Promise assures patients that they will be placed in a bed or receive care within 30 minutes of arrival in the ER. In 2007, several independent hospitals in the state formed Maryland eCare to jointly contract for intensive care unit (ICU) physician coverage via telemedicine. This technology allows clinical staff to continuously monitor ICU patients remotely. The positive results of the ER 30-Minute Promise and Maryland eCare program show that technological advances in an independent, small, rural hospital can make a significant impact on its ability to maintain independence. AGH's strategic investments prepared the organization well for the transition in 2014 to a value-based payment system.

  9. Saving Water at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Andy

    Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory's national security mission and is one of the institution's larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.

  10. Saving Water at Los Alamos National Laboratory

    ScienceCinema

    Erickson, Andy

    2018-01-16

    Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory’s national security mission and is one of the institution’s larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.

  11. Supersonics/Airport Noise Plan: An Evolutionary Roadmap

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2011-01-01

    This presentation discusses the Plan for the Airport Noise Tech Challenge Area of the Supersonics Project. It is given in the context of strategic planning exercises being done in other Projects to show the strategic aspects of the Airport Noise plan rather than detailed task lists. The essence of this strategic view is the decomposition of the research plan by Concept and by Tools. Tools (computational, experimental) are the description of the plan that resources (such as researchers) most readily identify with, while Concepts (here, noise reduction technologies or aircraft configurations) are the aspects that project management and outside reviewers most appreciate as deliverables and milestones. By carefully cross-linking these so that Concepts are addressed sequentially (roughly one after another) by researchers developing and applying their Tools simultaneously (in parallel with one another), the researchers can deliver milestones at a reasonable pace while doing the longer-term development that most Tools in aeroacoustics require. An example of this simultaneous application of tools was given for the Concept of High Aspect Ratio Nozzles. The presentation concluded with a few ideas on how this strategic view could be applied to the Subsonic Fixed Wing Project's Quiet Aircraft Tech Challenge Area as it works through its current roadmapping exercise.

  12. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex; hence, researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus as well as IEEE 30-bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the obtained profit maximisation through GSO-based bidding strategies is higher than the other three methods.
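
    As an illustration of solving the derived minimisation, the sketch below uses a generic population-based random search standing in for the group search optimiser, applied to a made-up negative-profit function with box limits on the bid coefficients; none of the numbers come from the paper.

    ```python
    import numpy as np

    # Generic population-based random search (a stand-in for GSO): minimise a
    # toy negative-profit function of a supplier's two bid coefficients within
    # simple box limits. The market model here is entirely hypothetical.

    rng = np.random.default_rng(0)

    def negative_profit(bid):
        a, b = bid
        cleared_q = max(0.0, 10.0 - 0.5 * a - 0.1 * b)   # toy demand response
        revenue = (a + b * cleared_q) * cleared_q
        cost = 2.0 * cleared_q + 0.05 * cleared_q ** 2
        return -(revenue - cost)

    lo, hi = np.array([0.0, 0.0]), np.array([20.0, 5.0])
    pop = rng.uniform(lo, hi, size=(50, 2))

    for _ in range(200):
        scores = np.apply_along_axis(negative_profit, 1, pop)
        best = pop[scores.argmin()]
        # members take a random step biased toward the current best candidate
        pop = np.clip(pop + 0.3 * (best - pop) + rng.normal(0, 0.2, pop.shape), lo, hi)

    print("best bid:", best, "profit:", -negative_profit(best))
    ```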

  13. Accelerating Scientific Advancement for Pediatric Rare Lung Disease Research. Report from a National Institutes of Health-NHLBI Workshop, September 3 and 4, 2015.

    PubMed

    Young, Lisa R; Trapnell, Bruce C; Mandl, Kenneth D; Swarr, Daniel T; Wambach, Jennifer A; Blaisdell, Carol J

    2016-12-01

    Pediatric rare lung disease (PRLD) is a term that refers to a heterogeneous group of rare disorders in children. In recent years, this field has experienced significant progress marked by scientific discoveries, multicenter and interdisciplinary collaborations, and efforts of patient advocates. Although genetic mechanisms underlie many PRLDs, pathogenesis remains uncertain for many of these disorders. Furthermore, epidemiology and natural history are insufficiently defined, and therapies are limited. To develop strategies to accelerate scientific advancement for PRLD research, the NHLBI of the National Institutes of Health convened a strategic planning workshop on September 3 and 4, 2015. The workshop brought together a group of scientific experts, intramural and extramural investigators, and advocacy groups with the following objectives: (1) to discuss the current state of PRLD research; (2) to identify scientific gaps and barriers to increasing research and improving outcomes for PRLDs; (3) to identify technologies, tools, and reagents that could be leveraged to accelerate advancement of research in this field; and (4) to develop priorities for research aimed at improving patient outcomes and quality of life. This report summarizes the workshop discussion and provides specific recommendations to guide future research in PRLD.

  14. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016) was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in NRC Frontiers of Data and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements reflecting the areas identified as important by STREAM2016: (i) Algorithms; (ii) Programming Models, Languages and Runtime Systems; (iii) Human-in-the-Loop and Steering in Scientific Workflows; and (iv) Facilities.

  15. Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

    PubMed

    Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu

    2012-01-01

    Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approaches ingeniously integrate the distinguished features of the symmetry properties of the imaging system and GPU architectures, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented employing both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system.
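
    The update at the core of OSEM reconstruction can be sketched in a few lines; the NumPy version below uses a tiny dense system matrix and made-up data, whereas the record's implementation realizes the same update with GPU kernels and detector symmetries.

    ```python
    import numpy as np

    # Minimal sketch of the ordered-subset expectation-maximization (OSEM)
    # update on a tiny dense system matrix with synthetic data.

    def osem(A, y, n_subsets=4, n_iters=10, eps=1e-12):
        """A: (n_lor, n_vox) system matrix, y: (n_lor,) measured counts."""
        n_lor, n_vox = A.shape
        x = np.ones(n_vox)
        subsets = np.array_split(np.arange(n_lor), n_subsets)
        for _ in range(n_iters):
            for idx in subsets:
                As, ys = A[idx], y[idx]
                ratio = ys / (As @ x + eps)                  # forward project, compare
                x = x * (As.T @ ratio) / (As.sum(axis=0) + eps)  # back project, normalize
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.random((64, 16))
        x_true = rng.random(16)
        y = rng.poisson(A @ x_true * 50) / 50.0
        print(np.round(osem(A, y), 2))
    ```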

  16. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
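
    One way to picture performance-aware co-scheduling is a greedy assignment of each feature-computation task to whichever device would finish it earliest; the task names, times, and speedups below are hypothetical, not measurements from the paper.

    ```python
    # Hypothetical sketch of performance-aware CPU/GPU co-scheduling: each task
    # carries an estimated CPU time and a GPU speedup; a greedy scheduler assigns
    # it to whichever device would finish it earliest given current queues.

    def co_schedule(tasks):
        """tasks: list of (name, cpu_time, gpu_speedup). Returns per-device plans."""
        clock = {"cpu": 0.0, "gpu": 0.0}
        plan = {"cpu": [], "gpu": []}
        # schedule the most GPU-friendly tasks first
        for name, cpu_t, speedup in sorted(tasks, key=lambda t: -t[2]):
            cost = {"cpu": cpu_t, "gpu": cpu_t / speedup}
            device = min(("cpu", "gpu"), key=lambda d: clock[d] + cost[d])
            clock[device] += cost[device]
            plan[device].append(name)
        return plan, max(clock.values())

    if __name__ == "__main__":
        tasks = [("color-deconv", 4.0, 6.0), ("morphology", 3.0, 1.5),
                 ("texture", 5.0, 8.0), ("gradient-stats", 2.0, 4.0)]
        print(co_schedule(tasks))
    ```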

  17. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    NASA Astrophysics Data System (ADS)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  18. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.

  19. A New Look at NASA: Strategic Research In Information Technology

    NASA Technical Reports Server (NTRS)

    Alfano, David; Tu, Eugene (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.

  20. Extraordinary Tools for Extraordinary Science: The Impact ofSciDAC on Accelerator Science&Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  1. ELECTRON ACCELERATION IN PULSAR-WIND TERMINATION SHOCKS: AN APPLICATION TO THE CRAB NEBULA GAMMA-RAY FLARES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroon, John J.; Becker, Peter A.; Dermer, Charles D.

    The γ-ray flares from the Crab Nebula observed by AGILE and Fermi-LAT, reaching GeV energies and lasting several days, challenge the standard models for particle acceleration in pulsar-wind nebulae because the radiating electrons have energies exceeding the classical radiation-reaction limit for synchrotron emission. Previous modeling has suggested that the synchrotron limit can be exceeded if the electrons experience electrostatic acceleration, but the resulting spectra do not agree very well with the data. As a result, there are still some important unanswered questions about the detailed particle acceleration and emission processes occurring during the flares. We revisit the problem using a new analytical approach based on an electron transport equation that includes terms describing electrostatic acceleration, stochastic wave-particle acceleration, shock acceleration, synchrotron losses, and particle escape. An exact solution is obtained for the electron distribution, which is used to compute the associated γ-ray synchrotron spectrum. We find that in our model the γ-ray flares are mainly powered by electrostatic acceleration, but the contributions from stochastic and shock acceleration play an important role in producing the observed spectral shapes. Our model can reproduce the spectra of all the Fermi-LAT and AGILE flares from the Crab Nebula, using magnetic field strengths in agreement with the multi-wavelength observational constraints. We also compute the spectrum and duration of the synchrotron afterglow created by the accelerated electrons, after they escape into the region on the downstream side of the pulsar-wind termination shock. The afterglow is expected to fade over a maximum period of about three weeks after the γ-ray flare.
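
    The transport equation itself is not written out in this record; a schematic Fokker-Planck form containing the named terms (with generic symbols, not the authors' notation) would read:

    ```latex
    % Schematic electron transport equation: stochastic (diffusive) acceleration,
    % first-order electrostatic and shock acceleration, synchrotron losses,
    % escape, and a source term. Symbols are generic stand-ins.
    \frac{\partial N(\gamma,t)}{\partial t}
      = \frac{\partial}{\partial \gamma}\left[ D(\gamma)\,\frac{\partial N}{\partial \gamma} \right]
      - \frac{\partial}{\partial \gamma}\left[ \left( A_{\mathrm{el}} + A_{\mathrm{sh}} \right) \gamma\, N
        - \beta_{\mathrm{syn}}\, \gamma^{2} N \right]
      - \frac{N}{t_{\mathrm{esc}}} + S(\gamma,t)
    ```

    Here D(γ) stands for the stochastic wave-particle diffusion coefficient, A_el and A_sh for the electrostatic and shock acceleration rates, β_syn for the synchrotron loss coefficient, t_esc for the escape timescale, and S for an injection term.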

  2. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  3. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications.

    PubMed

    Lei, Guoqing; Dou, Yong; Wan, Wen; Xia, Fei; Li, Rongchun; Ma, Meng; Zou, Dan

    2012-01-01

    Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). To the best of our knowledge, no implementation combines both CPU and extra accelerators, such as GPUs, to accelerate Zuker algorithm applications. In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperative execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for the CPU and GPU architectures. The experimental results show a speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a performance advantage of 16% over an optimized GPU implementation. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications.
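
    A hedged sketch of the CPU/GPU task-allocation idea: route each RNA sequence to a device and pick a length cutoff so the two estimated workloads roughly balance. The cubic cost model and the 10x GPU factor are assumptions for illustration, not the paper's measured figures.

    ```python
    # Hypothetical sketch: route each RNA sequence to the CPU or the GPU and
    # choose a length threshold so the two estimated workloads are balanced.
    # Cost models below are made up for illustration.

    def balance_threshold(lengths, cpu_cost, gpu_cost):
        """Pick a length cutoff: sequences shorter than it run on the CPU."""
        best = None
        for cut in sorted(set(lengths)):
            cpu_total = sum(cpu_cost(n) for n in lengths if n < cut)
            gpu_total = sum(gpu_cost(n) for n in lengths if n >= cut)
            makespan = max(cpu_total, gpu_total)
            if best is None or makespan < best[0]:
                best = (makespan, cut)
        return best  # (estimated makespan, length cutoff)

    if __name__ == "__main__":
        lengths = [120, 250, 400, 800, 1500, 2300, 3100]
        # Zuker folding cost grows roughly cubically; GPU assumed ~10x faster.
        print(balance_threshold(lengths, cpu_cost=lambda n: n**3,
                                gpu_cost=lambda n: n**3 / 10))
    ```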

  4. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...-1659-01] Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing... Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1.0 (Draft). This document is... (USG) agencies to accelerate their adoption of cloud computing. The roadmap has been developed through...

  5. Opportunities and Challenges of Cloud Computing to Improve Health Care Services

    PubMed Central

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  6. KSC-99pp1228

    NASA Image and Video Library

    1999-10-06

    Children at Cambridge Elementary School, Cocoa, Fla., eagerly unwrap computer equipment donated by Kennedy Space Center. Cambridge is one of 13 Brevard County schools receiving 81 excess contractor computers thanks to an innovative educational outreach project spearheaded by the NASA K-12 Education Services Office at KSC. Behind the children is Jim Thurston, a school volunteer and retired employee of USBI, who shared in the project. The Astronaut Memorial Foundation, a strategic partner in the effort, and several schools in rural Florida and Georgia also received refurbished computers as part of the year-long project. KSC employees put in about 3,300 volunteer hours to transform old, excess computers into upgraded, usable units. A total of $90,000 in upgraded computer equipment is being donated.

  7. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 are presented. In strategizing topological constructs and blocking structures factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special topics on grid generation strategies are also addressed to model control surface deflections, and material mapping.

  8. The Health Improvement Network (THIN)

    Cancer.gov

    The Health Improvement Network is a collaboration between Cegedim Strategic Data EPIC, an expert in the provision of UK primary care patient data that is used for medical research, and In Practice Systems (InPS), who continue to develop and supply the widely-used Vision general practice computer system.

  9. The Strategic Nature of Changing Your Mind

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Anderson, John R.

    2009-01-01

    In two experiments, we studied how people's strategy choices emerge through an initial and then a more considered evaluation of available strategies. The experiments employed a computer-based paradigm where participants solved multiplication problems using mental and calculator solutions. In addition to recording responses and solution times, we…

  10. Networking at NASA. Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  11. Research and Development in Natural Language Understanding as Part of the Strategic Computing Program.

    DTIC Science & Technology

    1987-04-01

    BBN is developing a series of increasingly sophisticated natural language understanding systems which will serve as an integrated interface...

  12. Strategic Use of Modality during Synchronous CMC

    ERIC Educational Resources Information Center

    Sauro, Shannon

    2009-01-01

    Research on computer-mediated communication (CMC) in the second language (L2) classroom has revealed the potential for technology to promote learner interaction and opportunities for negotiation of meaning as well as to provide opportunities for language access outside the classroom environment. Despite this potential, social, linguistic, and…

  13. Keeping an Eye on the Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A U

    2007-02-06

    Setting performance goals is part of the business plan for almost every company. The same is true in the world of supercomputers. Ten years ago, the Department of Energy (DOE) launched the Accelerated Strategic Computing Initiative (ASCI) to help ensure the safety and reliability of the nation's nuclear weapons stockpile without nuclear testing. ASCI, which is now called the Advanced Simulation and Computing (ASC) Program and is managed by DOE's National Nuclear Security Administration (NNSA), set an initial 10-year goal to obtain computers that could process up to 100 trillion floating-point operations per second (teraflops). Many computer experts thought the goal was overly ambitious, but the program's results have proved them wrong. Last November, a Livermore-IBM team received the 2005 Gordon Bell Prize for achieving more than 100 teraflops while modeling the pressure-induced solidification of molten metal. The prestigious prize, which is named for a founding father of supercomputing, is awarded each year at the Supercomputing Conference to innovators who advance high-performance computing. Recipients for the 2005 prize included six Livermore scientists--physicists Fred Streitz, James Glosli, and Mehul Patel and computer scientists Bor Chan, Robert Yates, and Bronis de Supinski--as well as IBM researchers James Sexton and John Gunnels. This team produced the first atomic-scale model of metal solidification from the liquid phase with results that were independent of system size. The record-setting calculation used Livermore's domain decomposition molecular-dynamics (ddcMD) code running on BlueGene/L, a supercomputer developed by IBM in partnership with the ASC Program. BlueGene/L reached 280.6 teraflops on the Linpack benchmark, the industry standard used to measure computing speed. As a result, it ranks first on the list of Top500 Supercomputer Sites released in November 2005. To evaluate the performance of nuclear weapons systems, scientists must understand how materials behave under extreme conditions. Because experiments at high pressures and temperatures are often difficult or impossible to conduct, scientists rely on computer models that have been validated with obtainable data. Of particular interest to weapons scientists is the solidification of metals. ''To predict the performance of aging nuclear weapons, we need detailed information on a material's phase transitions'', says Streitz, who leads the Livermore-IBM team. For example, scientists want to know what happens to a metal as it changes from molten liquid to a solid and how that transition affects the material's characteristics, such as its strength.

  14. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels, instead of using generic BLAS kernels such as those provided by NVIDIA's cuBLAS library, and by designing a graphics-processing-unit-specific sparse matrix-vector product kernel that is able to use the graphics processing unit's computing power more efficiently. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics-processing-unit-accelerated Krylov subspace iterative methods.
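
    For reference, the unpreconditioned BiCGSTAB iteration can be sketched as below; the record's contribution is fusing groups of these vector operations and the sparse matrix-vector products into application-specific GPU kernels, which the sketch does not attempt.

    ```python
    import numpy as np

    # Sketch of unpreconditioned BiCGSTAB in NumPy. The dot products, AXPYs,
    # and the two matrix-vector products per iteration are the operations that
    # custom GPU kernels can fuse to reduce data movement.

    def bicgstab(A, b, x0=None, tol=1e-8, max_iter=200):
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        r_hat = r.copy()
        rho_prev = alpha = omega = 1.0
        v = p = np.zeros_like(b)
        for _ in range(max_iter):
            rho = r_hat @ r
            beta = (rho / rho_prev) * (alpha / omega)
            p = r + beta * (p - omega * v)
            v = A @ p                       # first matrix-vector product
            alpha = rho / (r_hat @ v)
            s = r - alpha * v
            t = A @ s                       # second matrix-vector product
            omega = (t @ s) / (t @ t)
            x += alpha * p + omega * s
            r = s - omega * t
            rho_prev = rho
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        A = rng.random((50, 50)) + 50 * np.eye(50)   # diagonally dominant test matrix
        b = rng.random(50)
        x = bicgstab(A, b)
        print("residual:", np.linalg.norm(b - A @ x))
    ```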

  15. Computational thinking and thinking about computing

    PubMed Central

    Wing, Jeannette M.

    2008-01-01

    Computational thinking will influence everyone in every field of endeavour. This vision poses a new educational challenge for our society, especially for our children. In thinking about computing, we need to be attuned to the three drivers of our field: science, technology and society. Accelerating technological advances and monumental societal demands force us to revisit the most basic scientific questions of computing. PMID:18672462

  16. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
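
    The two grid-transfer operators named in this abstract, full-weighting restriction and bilinear prolongation, can be sketched as follows; the Poisson solve that sits between them is omitted, and the grid sizes are illustrative.

    ```python
    import numpy as np

    # Sketch of the two grid-transfer operators used by coarse-grid projection:
    # full-weighting restriction (fine -> coarse) and bilinear prolongation
    # (coarse -> fine) on a uniform 2D grid with (2N+1) points per side.

    def restrict_full_weighting(fine):
        """Fine (2N+1)x(2N+1) array -> coarse (N+1)x(N+1) array."""
        c = fine[::2, ::2].copy()
        c[1:-1, 1:-1] = (4 * fine[2:-2:2, 2:-2:2]
                         + 2 * (fine[1:-3:2, 2:-2:2] + fine[3:-1:2, 2:-2:2]
                                + fine[2:-2:2, 1:-3:2] + fine[2:-2:2, 3:-1:2])
                         + (fine[1:-3:2, 1:-3:2] + fine[1:-3:2, 3:-1:2]
                            + fine[3:-1:2, 1:-3:2] + fine[3:-1:2, 3:-1:2])) / 16.0
        return c

    def prolong_bilinear(coarse):
        """Coarse (N+1)x(N+1) array -> fine (2N+1)x(2N+1) array."""
        n = coarse.shape[0] - 1
        f = np.zeros((2 * n + 1, 2 * n + 1))
        f[::2, ::2] = coarse
        f[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])
        f[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])
        f[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[:-1, 1:]
                                + coarse[1:, :-1] + coarse[1:, 1:])
        return f

    if __name__ == "__main__":
        x = np.linspace(0, 1, 17)
        fine = np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]
        coarse = restrict_full_weighting(fine)
        print(np.abs(prolong_bilinear(coarse) - fine).max())
    ```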

  17. Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fermilab

    2017-09-01

    Scientists, engineers and programmers at Fermilab are tackling today’s most challenging computational problems. Their solutions, motivated by the needs of worldwide research in particle physics and accelerators, help America stay at the forefront of innovation.

  18. GPU-accelerated Lattice Boltzmann method for anatomical extraction in patient-specific computational hemodynamics

    NASA Astrophysics Data System (ADS)

    Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.

    2014-11-01

    Existing research on patient-specific computational hemodynamics (PSCH) heavily relies on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done using existing commercial software due to the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH; thus, explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing. The parallel acceleration over GPU achieves excellent performance in PSCH computation. An application study will be presented which segments an aortic artery from a chest CT dataset and models PSCH of the segmented artery.

  19. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
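
    In the three-phase method the matrix-multiplication step chains a view matrix, a BSDF transmission matrix, a daylight matrix, and a sky vector; a schematic NumPy version is shown below, with dimensions chosen for illustration rather than taken from the paper.

    ```python
    import numpy as np

    # Schematic sketch of the matrix-multiplication step of a three-phase
    # daylight calculation: illuminance = V x T x D x s, with V the view
    # matrix, T the BSDF transmission matrix, D the daylight matrix, and s a
    # sky vector. Dimensions are illustrative, not from the record.

    rng = np.random.default_rng(3)
    n_sensors, n_window_patches, n_sky_patches = 10_000, 145, 2306

    V = rng.random((n_sensors, n_window_patches))         # window -> sensor
    T = rng.random((n_window_patches, n_window_patches))  # fenestration BSDF
    D = rng.random((n_window_patches, n_sky_patches))     # sky -> window
    s = rng.random(n_sky_patches)                         # one sky condition

    # Associativity matters for cost: collapsing onto the sky vector first keeps
    # every intermediate small, which is also what makes the step easy to offload.
    illuminance = V @ (T @ (D @ s))
    print(illuminance.shape)
    ```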

  20. Quantum Chemical Calculations Using Accelerators: Migrating Matrix Operations to the NVIDIA Kepler GPU and the Intel Xeon Phi.

    PubMed

    Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S

    2014-03-11

    Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations occurring on the accelerator and/or the host. For data transfers over PCIe, the GPU provides the best overall performance for data sizes up to 4096 MB with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
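
    The size-based host-versus-accelerator decision described here can be sketched with a simple cost model that weighs kernel time against PCIe transfer time; the rates and bandwidth below are placeholders, not the measured values from the study.

    ```python
    # Hypothetical sketch of a size-based dispatch rule for double precision
    # matrix multiply: estimate host time and accelerator time (kernel plus
    # PCIe transfers) and pick the faster device. All rates are assumptions.

    HOST_GFLOPS = 150.0        # sustained host DGEMM rate (assumed)
    ACC_GFLOPS = 900.0         # sustained accelerator DGEMM rate (assumed)
    PCIE_GB_S = 5.5            # one-way transfer bandwidth (assumed)

    def choose_device(m, n, k, dtype_bytes=8):
        flops = 2.0 * m * n * k
        bytes_moved = dtype_bytes * (m * k + k * n + m * n)  # A, B up; C down
        t_host = flops / (HOST_GFLOPS * 1e9)
        t_acc = flops / (ACC_GFLOPS * 1e9) + bytes_moved / (PCIE_GB_S * 1e9)
        return ("accelerator" if t_acc < t_host else "host", t_host, t_acc)

    if __name__ == "__main__":
        for size in (256, 1024, 4096):
            print(size, choose_device(size, size, size))
    ```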

  1. ISEES: an institute for sustainable software to accelerate environmental science

    NASA Astrophysics Data System (ADS)

    Jones, M. B.; Schildhauer, M.; Fox, P. A.

    2013-12-01

    Software is essential to the full science lifecycle, spanning data acquisition, processing, quality assessment, data integration, analysis, modeling, and visualization. Software runs our meteorological sensor systems, our data loggers, and our ocean gliders. Every aspect of science is impacted by, and improved by, software. Scientific advances ranging from modeling climate change to the sequencing of the human genome have been rendered possible in the last few decades due to the massive improvements in the capabilities of computers to process data through software. This pivotal role of software in science is broadly acknowledged, while simultaneously being systematically undervalued through minimal investments in maintenance and innovation. As a community, we need to embrace the creation, use, and maintenance of software within science, and address problems such as code complexity, openness, reproducibility, and accessibility. We also need to fully develop new skills and practices in software engineering as a core competency in our earth science disciplines, starting with undergraduate and graduate education and extending into university and agency professional positions. The Institute for Sustainable Earth and Environmental Software (ISEES) is being envisioned as a community-driven activity that can facilitate and galvanize activities around scientific software in an analogous way to synthesis centers such as NCEAS and NESCent that have stimulated massive advances in ecology and evolution. We will describe the results of six workshops (Science Drivers, Software Lifecycles, Software Components, Workforce Development and Training, Sustainability and Governance, and Community Engagement) that have been held in 2013 to envision such an institute. We will present community recommendations from these workshops and our strategic vision for how ISEES will address the technical issues in the software lifecycle, sustainability of the whole software ecosystem, and the critical issue of computational training for the scientific community. (Figure: Process for envisioning ISEES.)

  2. Networking Cyberinfrastructure Resources to Support Global, Cross-disciplinary Science

    NASA Astrophysics Data System (ADS)

    Lehnert, K.; Ramamurthy, M. K.

    2016-12-01

    Geosciences are globally connected by nature and the grand challenge problems like climate change, ocean circulations, seasonal predictions, impact of volcanic eruptions, etc. all transcend both disciplinary and geographic boundaries, requiring cross-disciplinary and international partnerships. Cross-disciplinary and international collaborations are also needed to unleash the power of cyber- (or e-) infrastructure (CI) by networking globally distributed, multi-disciplinary data, software, and computing resources to accelerate new scientific insights and discoveries. While the promises of a global and cross-disciplinary CI are exhilarating and real, a range of technical, organizational, and social challenges needs to be overcome in order to achieve alignment and linking of operational data systems, software tools, and computing facilities. New modes of collaboration require agreement on and governance of technical standards and best practices, and funding for necessary modifications. This presentation will contribute the perspective of domain-specific data facilities to the discussion of cross-disciplinary and international collaboration in CI development and deployment, in particular those of IEDA (Interdisciplinary Earth Data Alliance) serving the solid Earth sciences and Unidata serving atmospheric sciences. Both facilities are closely involved with the US NSF EarthCube program that aims to network and augment existing Geoscience CI capabilities "to make disciplinary boundaries permeable, nurture and facilitate knowledge sharing, …, and enhance collaborative pursuit of cross-disciplinary research" (EarthCube Strategic Vision), while also collaborating internationally to network domain-specific and cross-disciplinary CI resources. These collaborations are driven by the substantial benefits to the science community, but create challenges, when operational and funding constraints need to be balanced with adjustments to new joint data curation practices and interoperability standards.

  3. Optimization of the RF cavity heat load and trip rates for CEBAF at 12 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, He; Roblin, Yves R.; Freyberger, Arne P.

    2017-05-01

    The Continuous Electron Beam Accelerator Facility at JLab has 200 RF cavities in each of the north and south linacs after the 12 GeV upgrade. The purpose of this work is to simultaneously optimize the heat load and the trip rate for the cavities and to reconstruct the Pareto-optimal front in a timely manner when some of the cavities are turned down. By choosing an efficient optimizer and strategically creating the initial gradients, the Pareto-optimal front for no more than 15 cavities down can be re-established within 20 seconds.
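
    The bi-objective bookkeeping behind this optimization can be illustrated by extracting the non-dominated (heat load, trip rate) candidates; the candidate values in the sketch are placeholders, not CEBAF data.

    ```python
    # Sketch of Pareto-front extraction for a bi-objective problem: given
    # candidate gradient settings scored by (heat load, trip rate), keep only
    # the non-dominated candidates. Values below are placeholders.

    def pareto_front(points):
        """points: list of (heat_load, trip_rate); both objectives are minimized."""
        front = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                front.append(p)
        return sorted(front)

    if __name__ == "__main__":
        candidates = [(4200, 8.1), (4500, 6.0), (4900, 5.9), (4300, 7.0),
                      (5200, 4.2), (4250, 9.5), (5100, 4.5)]
        print(pareto_front(candidates))
    ```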

  4. Historical review of tactical missile airframe developments

    NASA Technical Reports Server (NTRS)

    Spearman, M. L.

    1992-01-01

    A comprehensive development history of missile airframe aerodynamics is presented, encompassing ground-, ground vehicle-, ship-, and air-launched categories of all ranges short of strategic. Emphasis is placed on the swift acceleration of missile configuration aerodynamics by German researchers in the course of the Second World War and by U.S. research establishments thereafter, often on the foundations laid by German workers. Examples are given of foundational airframe design criteria established by systematic researches undertaken in the 1950s, regarding L/D ratios, normal force and pitching moment characteristics, minimum drag forebodies and afterbodies, and canard and delta winged configuration aerodynamics.

  5. Atomic and close-to-atomic scale manufacturing—A trend in manufacturing development

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou

    2016-12-01

    Manufacturing is the foundation of a nation's economy. It is the primary industry to promote economic and social development. To accelerate and upgrade China's manufacturing sector from "precision manufacturing" to "high-performance and high-quality manufacturing", a new breakthrough should be found in terms of achieving a "leap-frog development". Unlike conventional manufacturing, the fundamental theory of "Manufacturing 3.0" is beyond the scope of conventional theory; rather, it is based on new principles and theories at the atomic and/or close-to-atomic scale. Obtaining a dominant role at the international level is a strategic move for China's progress.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovelace, III, Henry H.

    In accelerator physics, models of a given machine are used to predict the behaviors of the beam, magnets, and radiofrequency cavities. The use of computational models has become widespread to shorten the development period of the accelerator lattice. Various programs are used to create lattices and run simulations of both transverse and longitudinal beam dynamics, including Methodical Accelerator Design (MAD) in its MAD8 and MADX versions, Zgoubi, the Polymorphic Tracking Code (PTC), and many others. In this discussion, BMAD (Baby Methodical Accelerator Design) is presented as an additional tool for creating and simulating accelerator lattices for the study of beam dynamics in the Relativistic Heavy Ion Collider (RHIC).

  7. Fine-grained parallelism accelerating for RNA secondary structure prediction with pseudoknots based on FPGA.

    PubMed

    Xia, Fei; Jin, Guoqing

    2014-06-01

    PKNOTS is one of the best-known benchmark programs and has been widely used to predict RNA secondary structure including pseudoknots. It adopts the standard four-dimensional (4D) dynamic programming (DP) method and is the basis of many variants and improved algorithms. Unfortunately, the O(N^6) computing requirements and complicated data dependencies greatly limit the usefulness of the PKNOTS package as gene databases explode in size. In this paper, we present a fine-grained parallel PKNOTS package and prototype system for accelerating the RNA folding application on an FPGA chip. We adopt a series of storage optimization strategies to resolve the "Memory Wall" problem, aggressively exploit parallel computing strategies to improve computational efficiency, and propose several methods that collectively reduce the storage requirements for FPGA on-chip memory. To the best of our knowledge, our design is the first FPGA implementation that accelerates the 4D DP problem for RNA folding including pseudoknots. The experimental results show an average speedup of more than 50x over the PKNOTS-1.08 software running on a PC platform with an Intel Core2 Q9400 quad-core CPU, while the power consumption of our FPGA accelerator is only about 50% of that of general-purpose microprocessors.
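
    The PKNOTS recursion itself is four-dimensional and too involved to reproduce here. As a far simpler stand-in that only shows the flavor of base-pairing dynamic programming (no pseudoknots, no energy model), the Python sketch below implements a Nussinov-style maximum-pairing recurrence.

      # Nussinov-style maximum base-pairing DP: a simplified stand-in for the
      # much more complex 4D recursion used by PKNOTS (pseudoknots excluded).
      PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

      def nussinov(seq, min_loop=3):
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):          # distance between i and j
              for i in range(n - span):
                  j = i + span
                  best = max(dp[i + 1][j], dp[i][j - 1])          # i or j unpaired
                  if (seq[i], seq[j]) in PAIRS:
                      best = max(best, dp[i + 1][j - 1] + 1)      # i pairs with j
                  for k in range(i + 1, j):                       # bifurcation
                      best = max(best, dp[i][k] + dp[k + 1][j])
                  dp[i][j] = best
          return dp[0][n - 1]

      print(nussinov("GGGAAAUCC"))    # maximum number of base pairs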

  8. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE PAGES

    Vay, J. -L.; Almgren, A.; Bell, J.; ...

    2018-01-31

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code, in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
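
    As a hedged illustration of the particle-in-cell cycle that such codes scale up, and not of WarpX itself (a 3D electromagnetic code with adaptive mesh refinement), the Python sketch below runs a normalized 1D electrostatic PIC loop: deposit charge, solve for the field, gather it at the particles, and push them.

      import numpy as np

      ng, L, dt = 64, 2 * np.pi, 0.1
      dx = L / ng
      rng = np.random.default_rng(1)
      x = rng.uniform(0, L, 10000)                    # particle positions
      v = 0.1 * rng.standard_normal(x.size)           # particle velocities

      for step in range(100):
          # Cloud-in-cell deposition of electron density onto the grid.
          idx = (x / dx).astype(int) % ng
          frac = x / dx - np.floor(x / dx)
          n_e = (np.bincount(idx, weights=1 - frac, minlength=ng)
                 + np.bincount((idx + 1) % ng, weights=frac, minlength=ng)) / (x.size / ng)
          rho = 1.0 - n_e                             # neutralizing ion background

          # Spectral Poisson solve: d^2(phi)/dx^2 = -rho, E = -d(phi)/dx.
          k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
          k[0] = 1.0                                  # placeholder; k=0 mode zeroed below
          E_k = -1j * np.fft.fft(rho) / k
          E_k[0] = 0.0
          E = np.fft.ifft(E_k).real

          # Gather the field at the particles and push (charge-to-mass of -1).
          E_p = E[idx] * (1 - frac) + E[(idx + 1) % ng] * frac
          v -= E_p * dt
          x = (x + v * dt) % L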

  9. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
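
    The ordered-subsets power-factor scheme is specific to the paper, but the total-variation component is generic. The Python sketch below is an assumption-laden stand-in rather than the authors' algorithm: it takes plain gradient steps on a data-fidelity term plus a smoothed anisotropic TV penalty for a toy denoising problem (boundary handling is deliberately crude).

      import numpy as np

      def tv_gradient(u, eps=1e-8):
          # Subgradient of a smoothed anisotropic TV penalty sum(|du/dx| + |du/dy|).
          dx = np.diff(u, axis=1, append=u[:, -1:])
          dy = np.diff(u, axis=0, append=u[-1:, :])
          gx = dx / np.sqrt(dx ** 2 + eps)
          gy = dy / np.sqrt(dy ** 2 + eps)
          div = (np.diff(gx, axis=1, prepend=gx[:, :1])
                 + np.diff(gy, axis=0, prepend=gy[:, :1]))
          return -div

      rng = np.random.default_rng(0)
      img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
      noisy = img + 0.2 * rng.standard_normal(img.shape)
      u = noisy.copy()
      for _ in range(200):
          u -= 0.1 * ((u - noisy) + 0.2 * tv_gradient(u))   # data term + TV term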

  10. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain of the implementation is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
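
    For reference, a multi-block LBP feature compares the mean intensity of the eight blocks surrounding the centre block of a 3x3 block grid and packs the comparisons into an 8-bit code; an integral image makes each block sum an O(1) lookup. The Python sketch below is a software illustration of that kernel only (the bit ordering and the >= convention are choices made here, not taken from the paper); the hardware IP evaluates many such features in parallel.

      import numpy as np

      def mb_lbp(img, x, y, bw, bh):
          # Integral (summed-area) table with a leading row/column of zeros.
          ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1).astype(np.int64)

          def block_sum(r, c):                 # sum of the bh x bw block at (r, c)
              return ii[r + bh, c + bw] - ii[r, c + bw] - ii[r + bh, c] + ii[r, c]

          centre = block_sum(y + bh, x + bw)
          # Neighbouring block offsets (row, col), clockwise from the top-left block.
          offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          code = 0
          for bit, (dy, dx) in enumerate(offsets):
              if block_sum(y + dy * bh, x + dx * bw) >= centre:
                  code |= 1 << bit
          return code

      img = np.random.default_rng(0).integers(0, 256, size=(24, 24))
      print(mb_lbp(img, x=0, y=0, bw=8, bh=8))   # one MB-LBP code for a 24x24 window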

  11. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same code executed on the central processing unit (CPU) alone. The acceleration obtained on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware cost; the speedup reaches nearly 13 times that of the CPU code for a phase-space scan of 314 × 314 orbits. In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
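
    The idea behind a manifold correction is to project the numerical solution back onto a conserved-quantity surface after each raw integration step. The Python sketch below shows a deliberately simplified version for a Newtonian Kepler orbit, rescaling the velocity so the total energy stays fixed; the paper applies analogous corrections to a post-Newtonian Hamiltonian with spin terms, and on the GPU.

      import numpy as np

      def energy(r, v, mu=1.0):
          return 0.5 * np.dot(v, v) - mu / np.linalg.norm(r)

      mu, dt = 1.0, 1e-3
      r = np.array([1.0, 0.0]); v = np.array([0.0, 1.1])
      E0 = energy(r, v, mu)

      for _ in range(100000):
          # Raw explicit Euler step: the energy drifts if left uncorrected.
          a = -mu * r / np.linalg.norm(r) ** 3
          r = r + v * dt
          v = v + a * dt
          # Manifold correction: rescale |v| so the total energy equals E0 again.
          kinetic_target = E0 + mu / np.linalg.norm(r)
          if kinetic_target > 0:
              v *= np.sqrt(2 * kinetic_target / np.dot(v, v))

      print(abs(energy(r, v, mu) - E0))   # stays at machine precision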

  12. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J. -L.; Almgren, A.; Bell, J.

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes such asmore » the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code, in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.« less

  13. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. To scale and fully use resources on these and next-generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010 and reflects the analysis and results of that time.

  14. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  15. Accelerating separable footprint (SF) forward and back projection on GPU

    NASA Astrophysics Data System (ADS)

    Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James

    2017-03-01

    Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosages over conventional reconstruction methods, such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells. For the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method for the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately in proportion to the number of GPUs.

  16. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  17. Battle Environment Assessment for Commanders: A Concept of Support for Joint and Component Strategy and Operations.

    DTIC Science & Technology

    1988-03-14

    focused application of decision aids. These decision aids must incorporate standardized processes, computer-assisted artificial intelligence, linkage...

  18. Virtual Team Effectiveness: An Empirical Examination of the Use of Communication Technologies on Trust and Virtual Team Performance

    ERIC Educational Resources Information Center

    Thomas, Valerie Brown

    2010-01-01

    Ubiquitous technology and agile organizational structures have enabled a strategic response to increasingly competitive, complex, and unpredictable challenges faced by many organizations. Using cyberinfrastructure, which is primarily the network of information, computers, communication technologies, and people, traditional organizations have…

  19. Adults, Computers and Problem Solving: "What's the Problem?" OECD Skills Studies

    ERIC Educational Resources Information Center

    Chung, Ji Eun; Elliott, Stuart

    2015-01-01

    The "OECD Skills Studies" series aims to provide a strategic approach to skills policies. It presents OECD internationally comparable indicators and policy analysis covering issues such as: quality of education and curricula; transitions from school to work; vocational education and training (VET); employment and unemployment; innovative…

  20. Collins Center Update. Volume 4, Issue 3, April-June 2002

    DTIC Science & Technology

    2002-06-01

    a free-play, computer-assisted war game. The objective of JLASS is to promote the joint professional military education of all participants by...gaming phase, they came together to execute their plans in a dynamic free-play environment. A Center for Strategic Leadership sponsored elective

  1. Three Essays on the Economics of Information Systems

    ERIC Educational Resources Information Center

    Jian, Lian

    2010-01-01

    My dissertation contains three studies centering on the question: how to motivate people to share high quality information on online information aggregation systems, also known as social computing systems? I take a social scientific approach to "identify" the strategic behavior of individuals in information systems, and "analyze" how non-monetary…

  2. Chief Information Officers: New and Continuing Issues.

    ERIC Educational Resources Information Center

    Edutech Report, 1988

    1988-01-01

    Examines the functions of chief information officers on college campuses, and describes three major categories that the functions fall into, depending on the nature of computing within the institution; i.e., information technology as (1) a strategic resource, (2) an aid to operations, and (3) a source of confusion. (CLB)

  3. Workshop Report: The Future of ROK Navy-US Navy Cooperation

    DTIC Science & Technology

    2007-10-01

    vulnerability also increases. Cyber attacks to paralyze information and communication systems through hacking, virus attacks on computers, and jamming...

  4. Fermilab | Science at Fermilab | Experiments & Projects | Intensity

    Science.gov Websites

  5. Computer Diagnostics.

    ERIC Educational Resources Information Center

    Tondow, Murray

    The report deals with the influence of computer technology on education, particularly guidance. The need for computers is a result of increasing complexity which is defined as: (1) an exponential increase of information; (2) an exponential increase in dissemination capabilities; and (3) an accelerating curve of change. Listed are five functions of…

  6. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture" (CUDA). A series of simulation experiments is carried out to test the accuracy and acceleration of the improved method. The results indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9, with a GTX 580 graphics card.

  7. GPU acceleration of a petascale application for turbulent mixing at high Schmidt number using OpenMP 4.5

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Buaria, D.; Yeung, P. K.; Gotoh, T.

    2018-07-01

    This paper reports on the successful implementation of a massively parallel GPU-accelerated algorithm for the direct numerical simulation of turbulent mixing at high Schmidt number. The work stems from a recent development (Comput. Phys. Commun., vol. 219, 2017, 313-328), in which a low-communication algorithm was shown to attain high degrees of scalability on the Cray XE6 architecture when overlapping communication and computation via dedicated communication threads. An even higher level of performance has now been achieved using OpenMP 4.5 on the Cray XK7 architecture, where on each node the 16 integer cores of an AMD Interlagos processor share a single Nvidia K20X GPU accelerator. In the new algorithm, data movements are minimized by performing virtually all of the intensive scalar field computations in the form of combined compact finite difference (CCD) operations on the GPUs. A memory layout in departure from usual practices is found to provide much better performance for a specific kernel required to apply the CCD scheme. Asynchronous execution enabled by adding the OpenMP 4.5 NOWAIT clause to TARGET constructs improves scalability when used to overlap computation on the GPUs with computation and communication on the CPUs. On the 27-petaflops supercomputer Titan at Oak Ridge National Laboratory, USA, a GPU-to-CPU speedup factor of approximately 5 is consistently observed at the largest problem size of 8192^3 grid points for the scalar field computed with 8192 XK7 nodes.

  8. Accelerating phylogenetics computing on the desktop: experiments with executing UPGMA in programmable logic.

    PubMed

    Davis, J P; Akella, S; Waddell, P H

    2004-01-01

    Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists and statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for the UPGMA and neighbor-joining algorithms. However, conventional computers have not delivered satisfactory speed when processing more than a few tens of taxa, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA execution by a factor of a hundred over the PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
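
    For orientation, UPGMA itself is a short agglomerative procedure: repeatedly merge the two closest clusters and recompute distances as size-weighted averages. The Python sketch below is a minimal, unoptimized rendering of that procedure (topology only, no branch lengths); it is not PHYLIP's implementation or the custom-computing design discussed in the paper.

      def upgma(dist, leaves):
          # dist: {frozenset({a, b}): distance}, leaves: list of taxon labels.
          size = {n: 1 for n in leaves}
          active = list(leaves)
          while len(active) > 1:
              pairs = [(x, y) for i, x in enumerate(active) for y in active[i + 1:]]
              a, b = min(pairs, key=lambda p: dist[frozenset(p)])
              merged = f"({a},{b})"
              # Size-weighted average distances from the merged cluster to the rest.
              for c in active:
                  if c not in (a, b):
                      dist[frozenset((merged, c))] = (
                          size[a] * dist[frozenset((a, c))]
                          + size[b] * dist[frozenset((b, c))]
                      ) / (size[a] + size[b])
              size[merged] = size[a] + size[b]
              active = [c for c in active if c not in (a, b)] + [merged]
          return active[0]

      d = {frozenset(p): v for p, v in [(("A", "B"), 2), (("A", "C"), 6), (("A", "D"), 8),
                                        (("B", "C"), 6), (("B", "D"), 8), (("C", "D"), 4)]}
      print(upgma(d, ["A", "B", "C", "D"]))   # ((A,B),(C,D))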

  9. GPU-Accelerated Voxelwise Hepatic Perfusion Quantification

    PubMed Central

    Wang, H; Cao, Y

    2012-01-01

    Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of the estimation of hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation, while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least squares fitting the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronically. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with ones by the CPU using the simulated DCE data and the experimental DCE MR images from patients. The computation speed is improved by 30 times using a NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626400 voxels in a patient’s liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while both methods result in perfusion parameters differences less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
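
    As a rough CPU-side illustration of the per-voxel task being parallelized, the Python sketch below fits a dual-input single-compartment model to one voxel's time series with SciPy. The parameter names, synthetic input functions and noise level are assumptions made for illustration, not the paper's acquisition protocol; the GPU version runs one such fit per thread block.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.arange(0, 120, 2.0)                      # time points, s
      dt = t[1] - t[0]
      Ca = np.exp(-(t - 20) ** 2 / 50.0)              # synthetic arterial input
      Cp = np.exp(-(t - 30) ** 2 / 200.0)             # synthetic portal-venous input

      def model(params):
          # C(t) = (k1a*Ca + k1p*Cp) convolved with exp(-k2*t), discretized.
          k1a, k1p, k2 = params
          inflow = k1a * Ca + k1p * Cp
          return np.convolve(inflow, np.exp(-k2 * t))[: t.size] * dt

      rng = np.random.default_rng(0)
      voxel = model((0.05, 0.10, 0.02)) + 0.002 * rng.standard_normal(t.size)

      fit = least_squares(lambda p: model(p) - voxel, x0=(0.01, 0.01, 0.01),
                          bounds=([0, 0, 0], [1, 1, 1]))
      print(fit.x)    # estimated (k1a, k1p, k2); repeat independently for every voxel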

  10. Using Accelerated Reader with ESL Students.

    ERIC Educational Resources Information Center

    Hamilton, Betty

    1997-01-01

    Describes the use of Accelerated Reader, a computer program that instantly provides scored tests on a variety of books read by high school ESL (English as a Second Language) students as free voluntary reading. Topics include reading improvement programs, including writing assignments; and changes in students' reading habits. (LRW)

  11. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  12. Thinking strategically: academic-practice relationships: one health system's experience.

    PubMed

    Wurmser, Teri; Bliss-Holtz, Jane

    2011-01-01

    Strategic planning and joint leverage of the strengths inherent in the academic and practice arenas of nursing are imperative to confront the challenges facing the profession of nursing and its place within the healthcare team of the future. This article presents a description and discussion of the implementation of several academic-practice partnership initiatives by Meridian Health, a health system located in central New Jersey. Included in the strategies discussed are creation of a support program for nonprofessional employees to become registered nurses; active partnership in the development of an accelerated BSN program; construction of support systems and academic partnerships for staff participation in RN-to-BSN programs; construction of on-site clinical simulation laboratories to foster interprofessional learning; and the implementation of a new BSN program, the first and only generic BSN program in two counties of the state. Outcomes of these academic-practice partnerships also are presented, including number of participants; graduation and NCLEX-RN pass rates; MH nurse vacancy rates; and nurse retention rates after first employment. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Gold and electrum jewellery in the strategic area of Gadir in Phoenician period

    NASA Astrophysics Data System (ADS)

    Ortega-Feliu, I.; Gómez-Tubío, B.; Ontalba Salamanca, M. Á.; Respaldiza, M. Á.; de la Bandera, M. L.; Ovejero Zappino, G.

    2007-07-01

    A set of ancient gold jewellery was found in Cádiz (formerly Gadir, southern Spain) in tombs dated to the Phoenician-Archaic period (VII-VI century BC) and is now exhibited in the local museum. The production of this strategic area is of great interest for understanding the commercial routes along the Mediterranean Sea at that time. Part of this production has already been analyzed by the authors, who found compositional differences and identified soldering procedures thanks to the use of an external microbeam; the analysis was completely non-destructive. For this work we again employed PIXE spectrometry with 2.2 MeV protons from the 3 MV Pelletron accelerator at the CNA to characterize the metallic alloys and the manufacturing techniques. We found an unusual composition of around 50 wt.% gold, 50 wt.% silver and some copper, which can be identified as electrum. Few analytical data on this particular kind of alloy are reported in the literature. The study of these objects can help to trace the trade of metals in the Phoenician-colonial period.

  14. Investigation of accelerating ion triode with magnetic insulation for neutron generation

    NASA Astrophysics Data System (ADS)

    Shikanov, A. E.; Kozlovskij, K. I.; Vovchenko, E. D.; Rashchikov, V. I.; Shatokhin, V. L.; Isaev, A. A.

    2017-12-01

    A vacuum accelerating tube (AT) for neutron generation is discussed, in which secondary electron emission is suppressed by the pulsed magnetic field of a helical line located inside the accelerating gap in front of a hollow conical cathode. The central anode was covered by the hollow cathode. In this design the AT is an ion triode in which the helical line serves as a grid. Computer simulation results for the distribution of the longitudinal magnetic field along the axis are presented.

  15. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
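
    To make the multigrid idea concrete independently of Proteus, the Python sketch below runs a two-grid correction cycle on the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries: smooth on the fine grid, restrict the residual, approximately solve the coarse problem, prolongate the correction, and smooth again. It is a schematic of the concept only, not the scheme implemented in the code.

      import numpy as np

      def smooth(u, f, h, sweeps=3):
          for _ in range(sweeps):                       # Gauss-Seidel sweeps
              for i in range(1, u.size - 1):
                  u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          return r

      def two_grid(u, f, h):
          u = smooth(u, f, h)                           # pre-smoothing
          r2 = residual(u, f, h)[::2].copy()            # restriction by injection
          e2 = smooth(np.zeros_like(r2), r2, 2 * h, sweeps=50)   # coarse "solve"
          e = np.zeros_like(u)
          e[::2] = e2                                   # prolongation
          e[1:-1:2] = 0.5 * (e2[:-1] + e2[1:])          # linear interpolation
          return smooth(u + e, f, h)                    # post-smoothing

      n = 65; h = 1.0 / (n - 1)
      xg = np.linspace(0, 1, n)
      f = np.pi ** 2 * np.sin(np.pi * xg)               # exact solution: sin(pi x)
      u = np.zeros(n)
      for cycle in range(10):
          u = two_grid(u, f, h)
      print(np.max(np.abs(u - np.sin(np.pi * xg))))     # error vs. the analytic solution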

  16. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging, the bottlenecks of limited memory and frequent data transfers are removed, and optimization strategies such as streaming and parallel pipelining are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging in that the imaging rate exceeds the raw data generation rate. PMID:27070606

  17. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computer-intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General-purpose GPU (GPGPU) technology is cheap, has low power consumption and fits into a normal desktop computer, so it could provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
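
    The attraction of red-black Gauss-Seidel on a GPU is that all cells of one colour depend only on cells of the other colour, so each half-sweep can be updated entirely in parallel. The Python sketch below shows the colouring on a toy Poisson-type problem; it assumes nothing about the iSOSIA equations, but the same pattern carries over.

      import numpy as np

      def red_black_sweep(u, f, h):
          i, j = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]), indexing="ij")
          for colour in (0, 1):
              mask = ((i + j) % 2 == colour)
              mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False   # keep boundary
              neighbours = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                            + np.roll(u, 1, 1) + np.roll(u, -1, 1))
              u[mask] = 0.25 * (neighbours + h * h * f)[mask]
          return u

      n, h = 65, 1.0 / 64
      u = np.zeros((n, n))
      f = np.ones((n, n))
      for _ in range(500):                 # many sweeps; multigrid would need far fewer
          u = red_black_sweep(u, f, h)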

  18. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging, the bottlenecks of limited memory and frequent data transfers are removed, and optimization strategies such as streaming and parallel pipelining are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging in that the imaging rate exceeds the raw data generation rate.

  19. Multi-Mode Cavity Accelerator Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yong; Hirshfield, Jay Leonard

    2016-11-10

    This project aimed to develop a prototype for a novel accelerator structure comprising coupled cavities that are tuned to support modes with harmonically related eigenfrequencies, with the goal of reaching an acceleration gradient >200 MeV/m and a breakdown rate <10^-7/pulse/meter. Phase I involved computations, design, and preliminary engineering of a prototype multi-harmonic cavity accelerator structure, plus tests of a bimodal cavity. A computational procedure was used to design an optimized profile for a bimodal cavity with high shunt impedance and low surface fields to maximize the reduction in temperature rise ΔT. This cavity supports the TM010 mode and its 2nd harmonic TM011 mode. Its fundamental frequency is at 12 GHz, to benchmark against the empirical criteria proposed within the worldwide High Gradient collaboration for X-band copper structures; namely, a maximum surface electric field E_sur < 260 MV/m and pulsed surface heating ΔT_max < 56 K. With optimized geometry, amplitude and relative phase of the two modes, reductions are found in pulsed surface heating, modified Poynting vector, and total RF power, as compared with operation at the same acceleration gradient using only the fundamental mode.

  20. A Unified Computational Model for Solar and Stellar Flares

    NASA Technical Reports Server (NTRS)

    Allred, Joel C.; Kowalski, Adam F.; Carlsson, Mats

    2015-01-01

    We present a unified computational framework that can be used to describe impulsive flares on the Sun and on dMe stars. The models assume that the flare impulsive phase is caused by a beam of charged particles that is accelerated in the corona and propagates downward depositing energy and momentum along the way. This rapidly heats the lower stellar atmosphere causing it to explosively expand and dramatically brighten. Our models consist of flux tubes that extend from the sub-photosphere into the corona. We simulate how flare-accelerated charged particles propagate down one-dimensional flux tubes and heat the stellar atmosphere using the Fokker-Planck kinetic theory. Detailed radiative transfer is included so that model predictions can be directly compared with observations. The flux of flare-accelerated particles drives return currents which additionally heat the stellar atmosphere. These effects are also included in our models. We examine the impact of the flare-accelerated particle beams on model solar and dMe stellar atmospheres and perform parameter studies varying the injected particle energy spectra. We find the atmospheric response is strongly dependent on the accelerated particle cutoff energy and spectral index.

  1. Metacognitive Support Accelerates Computer Assisted Learning for Novice Programmers

    ERIC Educational Resources Information Center

    Rum, Siti Nurulain Mohd; Ismail, Maizatul Akmar

    2017-01-01

    Computer programming is part of the curriculum in computer science education, and high dropout rates for this subject are a universal problem. Development of metacognitive skills, including the conceptual framework provided by socio-cognitive theories that afford reflective thinking, such as actively monitoring, evaluating, and modifying one's…

  2. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
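
    For readers unfamiliar with the baseline algorithm, the Python sketch below shows generic FISTA momentum on a small L1-regularized least-squares problem. It is only a stand-in: the paper replaces the plain gradient step with an OS-SART subproblem and uses different (weighted proximal) penalties.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((200, 400))
      x_true = np.zeros(400); x_true[rng.choice(400, 10, replace=False)] = 1.0
      b = A @ x_true
      lam = 0.1
      L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient

      def soft_threshold(v, tau):
          return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

      x = x_prev = np.zeros(400)
      t = 1.0
      for _ in range(300):
          t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
          y = x + ((t - 1.0) / t_next) * (x - x_prev)            # momentum extrapolation
          grad = A.T @ (A @ y - b)
          x_prev, x = x, soft_threshold(y - grad / L, lam / L)   # gradient + prox step
          t = t_next
      print(np.linalg.norm(x - x_true))                # distance to the sparse ground truth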

  3. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  4. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  5. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications

    PubMed Central

    2012-01-01

    Background: Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular free-energy-minimization methods for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). To the best of our knowledge, no implementation combines both the CPU and extra accelerators, such as GPUs, to accelerate Zuker algorithm applications. Results: In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between the CPU and the GPU for parallel cooperative execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for the CPU and GPU architectures. Conclusions: A speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a performance advantage of 16% over an optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications. PMID:22369626

  6. Synchronous acceleration with tapered dielectric-lined waveguides

    NASA Astrophysics Data System (ADS)

    Lemery, F.; Floettmann, K.; Piot, P.; Kärtner, F. X.; Aßmann, R.

    2018-05-01

    We present a general concept to accelerate nonrelativistic charged particles. Our concept employs an adiabatically-tapered dielectric-lined waveguide which supports accelerating phase velocities for synchronous acceleration. We propose an ansatz for the transient field equations, show it satisfies Maxwell's equations under an adiabatic approximation and find excellent agreement with a finite-difference time-domain computer simulation. The fields were implemented into the particle-tracking program astra and we present beam dynamics results for an accelerating field with a 1-mm wavelength and a peak electric field of 100 MV/m. Numerical simulations indicate that a ~200-keV electron beam can be accelerated to an energy of ~10 MeV over ~10 cm, with parameters of interest to a wide range of applications including, e.g., future advanced accelerators and ultra-fast electron diffraction.
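
    A quick sanity check of the quoted numbers, under the idealized assumption of a perfectly synchronous on-crest field (none of the tapered-waveguide physics is modeled), is sketched below.

      # Ideal on-crest energy gain: W_final = W_0 + e * E * L (values in eV).
      E_field = 100e6        # accelerating field, V/m
      length = 0.10          # interaction length, m
      W0 = 0.2e6             # initial kinetic energy, eV
      W_final = W0 + E_field * length
      print(f"final kinetic energy ~ {W_final / 1e6:.1f} MeV")   # about 10 MeV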

  7. GPU accelerated FDTD solver and its application in MRI.

    PubMed

    Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S

    2010-01-01

    The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we will present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B1 profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of electromagnetic field compared with its CPU counterpart. The acceleration in runtime has made such investigation possible, and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
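
    To show the kind of update loop being parallelized, the Python sketch below runs a minimal 1D Yee-scheme FDTD in free space with normalized units (Courant number of 1). It is purely illustrative; the MRI solver in the paper is fully three-dimensional, includes material properties, and drives a B1 shimming study on top.

      import numpy as np

      nz, nt = 400, 1000
      E = np.zeros(nz)                # electric field on integer grid points
      H = np.zeros(nz - 1)            # magnetic field on half-integer points
      for n in range(nt):
          H += np.diff(E)                                  # H update
          E[1:-1] += np.diff(H)                            # E update, PEC walls at the ends
          E[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source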

  8. Accelerating Alzheimer's disease drug innovations from the research pipeline to patients.

    PubMed

    Goldman, Dana P; Fillit, Howard; Neumann, Peter

    2018-03-23

    In June 2017, a diverse group of experts in Alzheimer's disease convened to discuss how to accelerate getting new drugs to patients to both prevent and treat the disease. Participants concluded that a more robust, diversified drug development pipeline is needed. Strategic policy measures can help keep new Alzheimer's disease therapies (whether to treat symptoms, prevent onset, or cure) affordable for patients while supporting innovation and facilitating greater information sharing among payers, providers, researchers, and the public. Such measures include a postmarket surveillance study system, disease registries, innovative payment approaches, harmonizing federal agency review requirements, allowing conditional coverage for promising therapeutics and technology while additional data are collected, and opening up channels for drug companies to communicate with payers (and each other) about data and outcomes. To combat reimbursement issues, policy makers should address the latency between potential treatment, which may be costly and fall on private payers, and societal benefits that accrue elsewhere. Copyright © 2018 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  9. Engaging local industry in the development of basic research infrastructure and instrumentation – The case of HIE-ISOLDE and ESS Scandinavia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahlander, Claes, E-mail: claes.fahlander@nuclear.lu.se

    Two world-class research facilities, the European Spallation Source, ESS, and the light-source facility MAX-IV, are being built in southern Sweden. They will primarily, when completed, be used for research in the fields of material sciences, life sciences, medicine and pharmacology. Their construction and the operation and maintenance of them for many years will create new business opportunities for companies in Europe in general and in Sweden, Denmark and Norway in particular in many different sectors. A project, CATE, Cluster for Accelerator Technology, was set up with the aim to strengthen the skills of companies in the Öresund-Kattegat-Skagerrak region in Scandinavia in the field of accelerator technology such that they will become competitive and be able to take advantage of the potential of these two research facilities. CATE was strategically important and has helped to create partnerships between companies and new business opportunities in the region.

  10. Engaging local industry in the development of basic research infrastructure and instrumentation - The case of HIE-ISOLDE and ESS Scandinavia

    NASA Astrophysics Data System (ADS)

    Fahlander, Claes

    2016-07-01

    Two world-class research facilities, the European Spallation Source, ESS, and the light-source facility MAX-IV, are being built in southern Sweden. They will primarily, when completed, be used for research in the fields of material sciences, life sciences, medicine and pharmacology. Their construction and the operation and maintenance of them for many years will create new business opportunities for companies in Europe in general and in Sweden, Denmark and Norway in particular in many different sectors. A project, CATE, Cluster for Accelerator Technology, was set up with the aim to strengthen the skills of companies in the Öresund-Kattegat-Skagerrak region in Scandinavia in the field of accelerator technology such that they will become competitive and be able to take advantage of the potential of these two research facilities. CATE was strategically important and has helped to create partnerships between companies and new business opportunities in the region.

  11. Adaptive Biomedical Innovation: Evolving Our Global System to Sustainably and Safely Bring New Medicines to Patients in Need

    PubMed Central

    Trusheim, M; Cobbs, E; Bala, M; Garner, S; Hartman, D; Isaacs, K; Lumpkin, M; Lim, R; Oye, K; Pezalla, E; Saltonstall, P; Selker, H

    2016-01-01

    The current system of biomedical innovation is unable to keep pace with scientific advancements. We propose to address this gap by reengineering innovation processes to accelerate reliable delivery of products that address unmet medical needs. Adaptive biomedical innovation (ABI) provides an integrative, strategic approach for process innovation. Although the term “ABI” is new, it encompasses fragmented “tools” that have been developed across the global pharmaceutical industry, and could accelerate the evolution of the system through more coordinated application. ABI involves bringing stakeholders together to set shared objectives, foster trust, structure decision‐making, and manage expectations through rapid‐cycle feedback loops that maximize product knowledge and reduce uncertainty in a continuous, adaptive, and sustainable learning healthcare system. Adaptive decision‐making, a core element of ABI, provides a framework for structuring decision‐making designed to manage two types of uncertainty – the maturity of scientific and clinical knowledge, and the behaviors of other critical stakeholders. PMID:27626610

  12. Extreme Light Infrastructure - Nuclear Physics Eli-Np Project

    NASA Astrophysics Data System (ADS)

    Gales, S.

    2015-06-01

    The development of high power lasers and the combination of such novel devices with accelerator technology has enlarged the science reach of many research fields, in particular High Energy, Nuclear and Astrophysics as well as societal applications in Material Science, Nuclear Energy and Medicine. The European Strategic Forum for Research Infrastructures (ESFRI) has selected a proposal based on these new premises called "ELI" for Extreme Light Infrastructure. ELI will be built as a network of three complementary pillars at the frontier of laser technologies. The ELI-NP pillar (NP for Nuclear Physics) is under construction near Bucharest (Romania) and will develop a scientific program using two 10 PW class lasers and a Back Compton Scattering High Brilliance and Intense Low Energy Gamma Beam, a marriage of Laser and Accelerator technology at the frontier of knowledge. In the present paper, the technical description of the facility, the present status of the project as well as the science, applications and future perspectives will be discussed.
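
    As a back-of-the-envelope illustration of why "marrying" a laser to an electron accelerator yields an MeV-range gamma beam (the numbers below are illustrative assumptions, not the ELI-NP design values): for a head-on collision, the maximum energy of a Compton-backscattered photon is set by the electron Lorentz factor.

    ```latex
    % Maximum energy of a Compton-backscattered photon (head-on geometry).
    % \gamma = E_e / (m_e c^2) is the electron Lorentz factor, E_L the laser photon energy.
    \[
      E_\gamma^{\max} \;=\; \frac{4\gamma^{2} E_L}{1 + 4\gamma E_L / (m_e c^{2})}
    \]
    % Illustrative numbers (assumptions, not ELI-NP design values):
    % E_e ~ 700 MeV (\gamma ~ 1370) and E_L ~ 1.2 eV give E_\gamma ~ 9 MeV,
    % i.e. eV-scale laser photons are up-shifted into the MeV range.
    ```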

  13. Nuclear Science and Applications with the Next Generation of High-Power Lasers and Brilliant Low-Energy Gamma Beams at ELI-NP

    NASA Astrophysics Data System (ADS)

    Gales, S.; ELI-NP Team

    2015-10-01

    The development of high power lasers and the combination of such novel devices with accelerator technology has enlarged the science reach of many research fields, in particular High Energy, Nuclear and Astrophysics as well as societal applications in Material Science, Nuclear Energy and Medicine. The European Strategic Forum for Research Infrastructures (ESFRI) has selected a proposal based on these new premises called "ELI" for Extreme Light Infrastructure. ELI will be built as a network of three complementary pillars at the frontier of laser technologies. The ELI-NP pillar (NP for Nuclear Physics) is under construction near Bucharest (Romania) and will develop a scientific program using two 10 PW class lasers and a Back Compton Scattering High Brilliance and Intense Low Energy Gamma Beam, a marriage of Laser and Accelerator technology at the frontier of knowledge. In the present paper, the technical and scientific status of the project as well as the applications of the gamma source will be discussed.

  14. FPGA accelerator for protein secondary structure prediction based on the GOR algorithm

    PubMed Central

    2011-01-01

    Background: Protein is an important molecule that performs a wide range of functions in biological systems. Recently, protein folding has attracted much more attention since the function of a protein can generally be derived from its molecular structure. The GOR algorithm is one of the most successful computational methods and has been widely used as an efficient analysis tool to predict secondary structure from protein sequence. However, the execution time is still intolerable with the steep growth of protein databases. Recently, FPGA chips have emerged as a promising application accelerator for bioinformatics algorithms by exploiting fine-grained custom design. Results: In this paper, we propose a complete fine-grained parallel hardware implementation on FPGA to accelerate the GOR-IV package for 2D protein structure prediction. To improve computing efficiency, we partition the parameter table into small segments and access them in parallel. We aggressively exploit data reuse schemes to minimize the need for loading data from external memory. The whole computation structure is carefully pipelined to overlap the sequence loading, computing and back-writing operations as much as possible. We implemented a complete GOR desktop system based on an FPGA chip XC5VLX330. Conclusions: The experimental results show a speedup factor of more than 430x over the original GOR-IV version and a 110x speedup over the optimized version with a multi-threaded SIMD implementation running on a PC platform with an AMD Phenom 9650 quad-core CPU for 2D protein structure prediction. Moreover, the power consumption is only about 30% of that of current general-purpose CPUs. PMID:21342582
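
    To make the windowed scoring performed by the GOR family of methods concrete, the sketch below (singlet information term only) sums per-class information values over a 17-residue window and picks the best class for each residue. The parameter table is random placeholder data, not the published GOR-IV tables, and the full GOR-IV method also includes pair information that this sketch omits.

    ```python
    # Minimal sketch of GOR-style singlet scoring (illustrative only; the record's
    # FPGA design implements the full GOR-IV tables in hardware).
    import numpy as np

    AMINO = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {a: i for i, a in enumerate(AMINO)}
    CLASSES = "HEC"          # helix, strand, coil
    HALF_WINDOW = 8          # GOR uses a 17-residue window (-8 .. +8)

    # info[c, w, a]: information contributed to class c by amino acid a at window
    # offset w (random placeholder values; real tables come from a training set).
    rng = np.random.default_rng(0)
    info = rng.normal(size=(len(CLASSES), 2 * HALF_WINDOW + 1, len(AMINO)))

    def predict_secondary_structure(seq: str) -> str:
        """Assign H/E/C to each residue by summing window information values."""
        pred = []
        for i in range(len(seq)):
            scores = np.zeros(len(CLASSES))
            for w in range(-HALF_WINDOW, HALF_WINDOW + 1):
                j = i + w
                if 0 <= j < len(seq) and seq[j] in AA_INDEX:
                    scores += info[:, w + HALF_WINDOW, AA_INDEX[seq[j]]]
            pred.append(CLASSES[int(np.argmax(scores))])
        return "".join(pred)

    print(predict_secondary_structure("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
    ```

    The FPGA design described in the record parallelizes exactly this kind of table lookup and window summation by splitting the parameter table into segments that are read concurrently.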

  15. Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Yamada, Masako

    The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
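
    For readers unfamiliar with what a 3-body term looks like, the sketch below evaluates the Stillinger-Weber angular contribution serially over all (i, j, k) triplets; the parameter values are illustrative rather than those used in the study, and the record's contribution is precisely how to decompose this kind of triplet loop across shared-memory threads without memory conflicts.

    ```python
    # Minimal sketch of the Stillinger-Weber three-body (angular) energy term for a
    # small cluster of atoms. Parameter values are illustrative placeholders.
    import numpy as np

    EPS, SIGMA = 1.0, 1.0                 # energy and length scales
    LAM, GAMMA, A_CUT = 21.0, 1.2, 1.8    # three-body strength, decay, cutoff
    COS_THETA0 = -1.0 / 3.0               # tetrahedral reference angle

    def sw_three_body_energy(pos: np.ndarray) -> float:
        """Sum the angular term over all triplets (j, k neighbors of central atom i)."""
        n = len(pos)
        e = 0.0
        for i in range(n):
            for j in range(n):
                if j == i:
                    continue
                rij = pos[j] - pos[i]
                dij = np.linalg.norm(rij)
                if dij >= A_CUT * SIGMA:
                    continue
                for k in range(j + 1, n):
                    if k == i:
                        continue
                    rik = pos[k] - pos[i]
                    dik = np.linalg.norm(rik)
                    if dik >= A_CUT * SIGMA:
                        continue
                    cos_theta = np.dot(rij, rik) / (dij * dik)
                    # h(r_ij, r_ik, theta): penalty for deviating from the reference angle,
                    # smoothly switched off at the cutoff by the exponential factors.
                    e += (LAM * EPS * (cos_theta - COS_THETA0) ** 2
                          * np.exp(GAMMA * SIGMA / (dij - A_CUT * SIGMA))
                          * np.exp(GAMMA * SIGMA / (dik - A_CUT * SIGMA)))
        return e

    cluster = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    print(sw_three_body_energy(cluster))
    ```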

  16. Microwave and Electron Beam Computer Programs

    DTIC Science & Technology

    1988-06-01

    Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. ... achieved with SCRIBE. It is a version of the Stanford Linear Accelerator Center (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on

  17. Object-oriented design for accelerator control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stok, P.D.V. van der; Berk, F. van den; Deckers, R.

    1994-02-01

    An object-oriented design for the distributed computer control system of the accelerator ring EUTERPE is presented. Because of the experimental nature of the ring, flexibility is of the utmost importance. The object-oriented principles have contributed considerably to the flexibility of the design incorporating multiple views, multi-level access and distributed surveillance.
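
    A minimal, hypothetical sketch of the kind of structure such a design implies (device classes that accept writes only from sufficiently privileged views, i.e. multi-level access) is given below; class and method names are illustrative and are not taken from the EUTERPE control system.

    ```python
    # Hypothetical sketch of multi-level access in an object-oriented control design.
    from abc import ABC, abstractmethod
    from enum import IntEnum

    class AccessLevel(IntEnum):
        OPERATOR = 1
        EXPERT = 2

    class Device(ABC):
        """Base class for controllable accelerator devices."""
        def __init__(self, name: str, min_level: AccessLevel):
            self.name = name
            self.min_level = min_level

        def set_value(self, value: float, level: AccessLevel) -> None:
            # Multi-level access: reject writes from insufficiently privileged views.
            if level < self.min_level:
                raise PermissionError(f"{self.name}: requires {self.min_level.name} access")
            self._apply(value)

        @abstractmethod
        def _apply(self, value: float) -> None: ...

    class DipoleMagnet(Device):
        def __init__(self, name: str):
            super().__init__(name, AccessLevel.OPERATOR)
            self.current = 0.0

        def _apply(self, value: float) -> None:
            self.current = value   # a real system would talk to hardware here

    magnet = DipoleMagnet("BM01")
    magnet.set_value(12.5, AccessLevel.OPERATOR)
    print(magnet.current)
    ```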

  18. A GPU-accelerated immersive audio-visual framework for interaction with molecular dynamics using consumer depth sensors.

    PubMed

    Glowacki, David R; O'Connor, Michael; Calabró, Gaetano; Price, James; Tew, Philip; Mitchell, Thomas; Hyde, Joseph; Tew, David P; Coughtrie, David J; McIntosh-Smith, Simon

    2014-01-01

    With advances in computational power, the rapidly growing role of computational/simulation methodologies in the physical sciences, and the development of new human-computer interaction technologies, the field of interactive molecular dynamics seems destined to expand. In this paper, we describe and benchmark the software algorithms and hardware setup for carrying out interactive molecular dynamics utilizing an array of consumer depth sensors. The system works by interpreting the human form as an energy landscape, and superimposing this landscape on a molecular dynamics simulation to chaperone the motion of the simulated atoms, affecting both graphics and sonified simulation data. GPU acceleration has been key to achieving our target of 60 frames per second (FPS), giving an extremely fluid interactive experience. GPU acceleration has also allowed us to scale the system for use in immersive 360° spaces with an array of up to ten depth sensors, allowing several users to simultaneously chaperone the dynamics. The flexibility of our platform for carrying out molecular dynamics simulations has been considerably enhanced by wrappers that facilitate fast communication with a portable selection of GPU-accelerated molecular force evaluation routines. In this paper, we describe a 360° atmospheric molecular dynamics simulation we have run in a chemistry/physics education context. We also describe initial tests in which users have been able to chaperone the dynamics of 10-alanine peptide embedded in an explicit water solvent. Using this system, both expert and novice users have been able to accelerate peptide rare event dynamics by 3-4 orders of magnitude.
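
    To make the "chaperoning" idea concrete, the sketch below adds a repulsive Gaussian bias, centered on points derived from a depth sensor, to the forces acting on the simulated atoms; the functional form and parameter values are assumptions for illustration and are not the GPU implementation described in the record.

    ```python
    # Minimal sketch of chaperoning MD with an external field: depth-sensor samples
    # act as centers of repulsive Gaussians that bias the per-atom forces.
    import numpy as np

    BIAS_HEIGHT = 5.0    # strength of the repulsive landscape (illustrative)
    BIAS_WIDTH = 0.5     # Gaussian width, same length units as atom positions

    def chaperone_forces(atom_pos: np.ndarray, sensor_pts: np.ndarray) -> np.ndarray:
        """Return per-atom bias forces from a sum of Gaussians at sensor points."""
        forces = np.zeros_like(atom_pos)
        for c in sensor_pts:
            d = atom_pos - c                                  # (n_atoms, 3)
            r2 = np.sum(d * d, axis=1, keepdims=True)
            # F = -dU/dr for U = H * exp(-r^2 / (2 w^2)): pushes atoms away from c.
            forces += (BIAS_HEIGHT / BIAS_WIDTH**2) * d * np.exp(-r2 / (2 * BIAS_WIDTH**2))
        return forces

    atoms = np.random.default_rng(1).uniform(-2, 2, size=(100, 3))
    hand_point = np.array([[0.0, 0.0, 0.0]])   # hypothetical depth-sensor sample
    print(chaperone_forces(atoms, hand_point).shape)   # (100, 3)
    ```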

  19. Grand Challenges: High Performance Computing and Communications. The FY 1992 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…

  20. The Uses and Impacts of Mobile Computing Technology in Hot Spots Policing.

    PubMed

    Koper, Christopher S; Lum, Cynthia; Hibdon, Julie

    2015-12-01

    Recent technological advances have much potential for improving police performance, but there has been little research testing whether they have made police more effective in reducing crime. To study the uses and crime control impacts of mobile computing technology in the context of geographically focused "hot spots" patrols. An experiment was conducted using 18 crime hot spots in a suburban jurisdiction. Nine of these locations were randomly selected to receive additional patrols over 11 weeks. Researchers studied officers' use of mobile information technology (IT) during the patrols using activity logs and interviews. Nonrandomized subgroup and multivariate analyses were employed to determine if and how the effects of the patrols varied based on these patterns. Officers used mobile computing technology primarily for surveillance and enforcement (e.g., checking automobile license plates and running checks on people during traffic stops and field interviews), and they noted both advantages and disadvantages to its use. Officers did not often use technology for strategic problem-solving and crime prevention. Given sufficient (but modest) dosages, the extra patrols reduced crime at the hot spots, but this effect was smaller in places where officers made greater use of technology. Basic applications of mobile computing may have little if any direct, measurable impact on officers' ability to reduce crime in the field. Greater training and emphasis on strategic uses of IT for problem-solving and crime prevention, and greater attention to its behavioral effects on officers, might enhance its application for crime reduction. © The Author(s) 2016.
