Sample records for high-performance computing act

  1. Support Expressed in Congress for U.S. High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2004-06-01

    Advocates for a stronger U.S. position in high-performance computing, which could help with a number of grand challenges in the Earth sciences and other disciplines, hope that legislation recently introduced in the House of Representatives will help to revitalize U.S. efforts. The High-Performance Computing Revitalization Act of 2004 would amend the earlier High-Performance Computing Act of 1991 (Public Law 102-194), which is partially credited with helping to strengthen U.S. capabilities in this area. The bill has the support of the Bush administration.

  2. High Performance Computing and Communications Act of 1991. Hearing Before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science, and Transportation. One Hundred Second Congress, First Session on S. 272 To Provide for a Coordinated Federal Research Program To Ensure Continued United States Leadership in High-Performance Computing.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This hearing before the Senate Subcommittee on Science, Technology, and Space focuses on S. 272, the High-Performance Computing and Communications Act of 1991, a bill that provides for a coordinated federal research and development program to ensure continued U.S. leadership in this area. High-performance computing is defined as representing the…

  3. High-Performance Computing Act of 1991. Report of the Senate Committee on Commerce, Science, and Transportation on S. 272. Senate, 102d Congress, 1st Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This report discusses Senate Bill no. 272, which provides for a coordinated federal research and development program to ensure continued U.S. leadership in high-performance computing. High performance computing is defined as representing the leading edge of technological advancement in computing, i.e., the most sophisticated computer chips, the…

  4. National High-Performance Computing and Networking Act. Report To Accompany S. 343, Senate, 102d Congress, 1st Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Energy and Natural Resources.

    The purpose of the bill (S. 343), as reported by the Senate Committee on Energy and Natural Resources, is to establish a federal commitment to the advancement of high-performance computing, improve interagency planning and coordination of federal high-performance computing and networking activities, authorize a national high-speed computer…

  5. H.R. 1757--High Performance Computing and High Speed Networking Applications Act of 1993. Hearings before the Subcommittee on Science of the Committee on Science, Space, and Technology. House of Representatives, One Hundred Third Congress, First Session (April 27, May 6, May 11, 1993).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    This document contains the transcript of three hearings on the High Performance Computing and High Speed Networking Applications Act of 1993 (H.R. 1757). The hearings were designed to obtain specific suggestions for improvements to the legislation and alternative or additional application areas that should be pursued. Testimony and prepared…

  6. High-Performance Computing Act of 1990: Report of the Senate Committee on Commerce, Science, and Transportation on S. 1067.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This committee report is intended to accompany S. 1067, a bill designed to provide for a coordinated federal research program in high-performance computing (HPC). The primary objective of the legislation is given as the acceleration of research, development, and application of the most advanced computing technology in research, education, and…

  7. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mustain, Christopher J.

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  8. H.R. 656--The High Performance Computer Technology Act of 1991. Hearing before the Subcommittee on Science, and the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology. U.S. House of Representatives, One Hundred Second Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    This hearing focused on H.R. 656, the companion bill to S. 272, which calls for high performance computing legislation. This is one of several initiatives to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The bill authorizes the development of a National Research and Education…

  9. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Current and advanced act control system definition study, volume 1

    NASA Technical Reports Server (NTRS)

    Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.

    1981-01-01

    An active controls technology (ACT) system architecture was selected based on current technology system elements and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of probability of crucial function failure of less than 1 x 10 to the -9th power per flight of 1 hr can be met with current technology system components, if the software is assumed fault free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.

  10. High performance computing and communications: FY 1997 implementation plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  11. National High Performance Computer Technology Act of 1989. Hearings Before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science, and Transportation. United States Senate, One Hundred First Congress, First Session (June 21, July 26, and September 15, 1989).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This collection of statements focuses on Title 2 of S. 1067, which calls for the National Science Foundation to establish a National Research and Education Network (NREN) by 1996. This is one of several titles in a bill to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The…

  12. A software control system for the ACTS high-burst-rate link evaluation terminal

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Daugherty, Elaine S.

    1991-01-01

    Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal, and displaying data. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit to and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.

  13. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1994-11-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent development based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  14. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent development based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  15. Combining high performance simulation, data acquisition, and graphics display computers

    NASA Technical Reports Server (NTRS)

    Hickman, Robert J.

    1989-01-01

    Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex by generating numerous large files from programs coded in FORTRAN that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.

  16. Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology. Report to the President and Congress

    ERIC Educational Resources Information Center

    Executive Office of the President, 2010

    2010-01-01

    This report is prepared by the President's Council of Advisors on Science and Technology (PCAST) acting in its role as the President's Innovation and Technology Advisory Council (PITAC). This report fulfills PCAST's responsibilities under Executive Order 13539 and the High-Performance Computing Act of 1991 (Public Law 102-194) as amended by the…

  17. 78 FR 37875 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Fiscal Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    ...: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching... above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988... computer matching involving the Federal government could be performed and adding certain protections for...

  18. 75 FR 54213 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Office of Personnel Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... 1021 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer.... SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub... computer matching involving the Federal government could be performed and adding certain protections for...

  19. 26 CFR 31.3121(i)-3 - Computation of remuneration for service performed by an individual as a volunteer or volunteer...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. 31... Provisions § 31.3121(i)-3 Computation of remuneration for service performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. In the case of an individual...

  20. 26 CFR 31.3121(i)-3 - Computation of remuneration for service performed by an individual as a volunteer or volunteer...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. 31... Provisions § 31.3121(i)-3 Computation of remuneration for service performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. In the case of an individual...

  1. 26 CFR 31.3121(i)-3 - Computation of remuneration for service performed by an individual as a volunteer or volunteer...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. 31... Provisions § 31.3121(i)-3 Computation of remuneration for service performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. In the case of an individual...

  2. 26 CFR 31.3121(i)-3 - Computation of remuneration for service performed by an individual as a volunteer or volunteer...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. 31... Provisions § 31.3121(i)-3 Computation of remuneration for service performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. In the case of an individual...

  3. 26 CFR 31.3121(i)-3 - Computation of remuneration for service performed by an individual as a volunteer or volunteer...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. 31... Provisions § 31.3121(i)-3 Computation of remuneration for service performed by an individual as a volunteer or volunteer leader within the meaning of the Peace Corps Act. In the case of an individual...

  4. Acting performance and flow state enhanced with sensory-motor rhythm neurofeedback comparing ecologically valid immersive VR and training screen scenarios.

    PubMed

    Gruzelier, John; Inoue, Atsuko; Smart, Roger; Steed, Anthony; Steffert, Tony

    2010-08-16

    Actors were trained in sensory-motor rhythm (SMR) neurofeedback interfaced with a computer rendition of a theatre auditorium. Enhancement of SMR led to changes in the lighting, while inhibition of theta and high beta led to a reduction in intrusive audience noise. Participants were randomised to a virtual reality (VR) representation in a ReaCTor, with surrounding image projection seen through glasses, or to a 2D computer screen, which is the conventional neurofeedback medium. In addition there was a no-training comparison group. Acting performance was evaluated by three experts from both filmed studio monologues and Hamlet excerpts on the stage of Shakespeare's Globe Theatre. Neurofeedback learning reached an asymptote earlier with ReaCTor training than with computer-screen training, as did identification of the required mental state, though both groups reached the same asymptote. These advantages were paralleled by higher ratings of acting performance overall, well-rounded performance, and especially the creativity subscale, including imaginative expression, conviction and characterisation. On the Flow State scales both neurofeedback groups scored higher than the no-training controls on self-ratings of sense of control, confidence and feeling at-one. This is the first demonstration of enhancement of artistic performance with eyes-open neurofeedback training, previously demonstrated only with eyes-closed slow-wave training. Efficacy is attributed to psychological engagement through the ecologically relevant learning context of the acting-space, putatively allowing transfer to the real world otherwise achieved with slow-wave training through imaginative visualisation. The immersive VR technology was more successful than a 2D rendition. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  5. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  6. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  7. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  8. Tracking Students' Cognitive Processes during Program Debugging--An Eye-Movement Approach

    ERIC Educational Resources Information Center

    Lin, Yu-Tzu; Wu, Cheng-Chih; Hou, Ting-Yun; Lin, Yu-Chih; Yang, Fang-Ying; Chang, Chia-Hu

    2016-01-01

    This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C…

  9. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from beginning of 3rd through end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  10. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  11. ACTS 118x: High Speed TCP Interoperability Testing

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Buffinton, Craig; Beering, Dave R.; Welch, Arun; Ivancic, William D.; Zernic, Mike; Hoder, Douglas J.

    1999-01-01

    With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer over satellite links using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate scalability of TCP/IP and determine to what extent the protocol can be optimized over a 622 Mbps satellite link. Through partnerships with government technology-oriented labs and the computer, telecommunication, and satellite industries, NASA Glenn was able to: (1) promote the development of interoperable, high-performance TCP/IP implementations across multiple computing/operating platforms; (2) work with the satellite industry to answer outstanding questions regarding the use of standard protocols (TCP/IP and ATM) for the delivery of advanced data services, and for use in spacecraft architectures; and (3) conduct a series of TCP/IP interoperability tests over OC12 ATM over a satellite network in a multi-vendor environment using ACTS. The experiments' various network configurations and the results are presented.

  12. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  13. Wide-Area Network Resources for Teacher Education.

    ERIC Educational Resources Information Center

    Aust, Ronald

    A central feature of the High Performance Computing Act of 1991 is the establishment of a National Research and Education Network (NREN). The level of access that teachers and teacher educators will need to benefit from the NREN and the types of network resources that are most useful for educators are explored, along with design issues that are…

  14. Global information infrastructure.

    PubMed

    Lindberg, D A

    1994-01-01

    The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.

  15. Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Färber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko

    2017-07-01

    The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100, from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and select events on an event-by-event basis. This will reduce the bandwidth to a manageable size for writing the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies have to be considered and investigated for the different parts of the system. For use in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and tested. FPGA compute accelerators are used more and more in standard servers, such as for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are generally built for high-performance computing, are also very interesting for the high-energy physics community. First, performance results of smaller test cases performed at the beginning are presented. Afterward, a part of the existing LHCb RICH particle identification is ported to the experimental FPGA accelerated platform and tested. We have compared the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.

  16. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
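
    The record describes template-driven, parallel agent-based modelling of cellular systems. The following minimal Python sketch (a generic illustration only; it is not FLAME's specification format or API) shows the double-buffered agent update loop that such frameworks parallelise across cores or GPU threads:

      import random

      class Cell:
          """A toy cellular agent: a position plus a discrete state."""
          def __init__(self, x, y):
              self.x, self.y = x, y
              self.state = "quiescent"

      def neighbours(cell, cells, radius=2.0):
          return [c for c in cells if c is not cell and
                  (c.x - cell.x) ** 2 + (c.y - cell.y) ** 2 <= radius ** 2]

      cells = [Cell(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(100)]
      cells[0].state = "active"

      for tick in range(50):
          # Double-buffered update: every agent's next state is decided from
          # the current tick's states, so updates are order-independent --
          # exactly the property that makes the loop safe to run in parallel.
          decisions = []
          for cell in cells:
              active = sum(n.state == "active" for n in neighbours(cell, cells))
              decisions.append("active" if cell.state == "active" or active >= 2
                               else "quiescent")
          for cell, new_state in zip(cells, decisions):
              cell.state = new_state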

  17. Advanced Communications Technology Satellite high burst rate link evaluation terminal experiment control and monitor software maintenance manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.

  18. Computational modeling and molecular imprinting for the development of acrylic polymers with high affinity for bile salts.

    PubMed

    Yañez, Fernando; Chianella, Iva; Piletsky, Sergey A; Concheiro, Angel; Alvarez-Lorenzo, Carmen

    2010-02-05

    This work has focused on the rational development of polymers capable of acting as traps of bile salts. Computational modeling was combined with molecular imprinting technology to obtain networks with high affinity for cholate salts in aqueous medium. The screening of a virtual library of 18 monomers, which are commonly used for imprinted networks, identified N-(3-aminopropyl)-methacrylate hydrochloride (APMA.HCl), N,N-diethylamino ethyl methacrylate (DEAEM) and ethyleneglycol methacrylate phosphate (EGMP) as suitable functional monomers with medium-to-high affinity for cholic acid. The polymers were prepared with a fixed cholic acid:functional monomer mole ratio of 1:4, but with various cross-linking densities. Compared to polymers prepared without functional monomer, both imprinted and non-imprinted microparticles showed a high capability to remove sodium cholate from aqueous medium. High affinity APMA-based particles even resembled the performance of commercially available cholesterol-lowering granules. The imprinting effect was evident in most of the networks prepared, showing that computational modeling and molecular imprinting can act synergistically to improve the performance of certain polymers. Nevertheless, both the imprinted and non-imprinted networks prepared with the best monomer (APMA.HCl) identified by the modeling demonstrated such high affinity for the template that the imprinting effect was less important. The fitting of adsorption isotherms to the Freundlich model indicated that, in general, imprinting increases the population of high affinity binding sites, except when the affinity of the functional monomer for the target molecule is already very high. The cross-linking density was confirmed as a key parameter that determines the accessibility of the binding points to sodium cholate. Materials prepared with 9% mol APMA and 91% mol cross-linker showed enough affinity to achieve binding levels of up to 0.4 mmol g(-1) (i.e., 170 mg g(-1)) under flow (1 mL min(-1)) of 0.2 mM sodium cholate solution. Copyright 2009 Elsevier B.V. All rights reserved.

  19. Utility of cervical spinal and abdominal computed tomography in diagnosing occult pneumothorax in patients with blunt trauma: Computed tomographic imaging protocol matters.

    PubMed

    Akoglu, Haldun; Akoglu, Ebru Unal; Evman, Serdar; Akoglu, Tayfun; Denizbasi, Arzu; Guneysel, Ozlem; Onur, Ozge; Onur, Ender

    2012-10-01

    Small pneumothoraces (PXs) that are not initially recognized on a chest x-ray film but are diagnosed by thoracic computed tomography (CT) are described as occult PXs (OCPXs). The objective of this study was to evaluate cervical spine (C-spine) and abdominal CT (ACT) for diagnosing OCPX and overt PX (OVPX). All patients with blunt trauma who presented consecutively to the emergency department during a 26-month period were included. Among all the chest CTs (CCTs) (6,155 patients) conducted during that period, 254 scans were confirmed to have a true PX. The findings in their C-spine CTs and ACTs were compared with the findings in CCTs. Among these patients, 254 had a diagnosis of PX confirmed with CCT. OCPXs were identified on the chest computed tomographic scans of 128 patients (70.3%), whereas OVPXs were evident in 54 patients (29.7%). Computed tomographic imaging of the C-spine was performed in 74% of patients with OCPX and 66.7% of patients with OVPX. Only 45 (35.2%) cases of OCPX and 42 (77.8%) cases of OVPX were detected by C-spine CT. ACT was performed in almost all patients, and 121 (95.3%) of 127 of these scans correctly identified an existing OCPX. Sensitivity of C-spine CT and ACT was 35.1% and 96.5%, respectively; specificity was 100% for both. Almost all OCPXs, regardless of intrathoracic location, could be detected by ACT or by combining C-spine and abdominal computed tomographic screening. If the junction of the first and second vertebrae is used as the caudad extent, C-spine CT does not have sufficient power to diagnose more than a third of the cases. Diagnostic study, level III.

  20. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
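
    The record's central claim, that one very-high-level tensor operation can stand in for many lines of conventional code, is easy to illustrate. The sketch below is a loose NumPy analogue (the record shows no actual Tensoral syntax, so none is reproduced here): a derivative over an entire 3D field as a single whole-array operation versus explicit loops.

      import numpy as np

      h = 0.1                                  # grid spacing
      field = np.random.rand(64, 64, 64)       # a 3D scalar field ("database")

      # Very-high-level style: one operation acts on the whole database.
      ddx_whole = np.gradient(field, h, axis=0)

      # Conventional style: the same central difference, written out as loops.
      ddx_loops = np.zeros_like(field)
      for i in range(1, 63):
          for j in range(64):
              for k in range(64):
                  ddx_loops[i, j, k] = (field[i + 1, j, k] - field[i - 1, j, k]) / (2 * h)

      # Interior points agree; np.gradient falls back to one-sided
      # differences on the boundary planes.
      assert np.allclose(ddx_whole[1:-1], ddx_loops[1:-1])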

  1. 29 CFR 4.178 - Computation of hours worked.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in which the employee is suffered or permitted to work whether or not required to do so, and all time... performance of a covered contract, a computation of their hours worked in each workweek when such work under... in which the employee is engaged in performing work on contracts subject to the Act. However, unless...

  2. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system…

  3. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  4. A computational cognitive model of self-efficacy and daily adherence in mHealth.

    PubMed

    Pirolli, Peter

    2016-12-01

    Mobile health (mHealth) applications provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting day-to-day health behavior change dynamics. A computational predictive model (ACT-R-DStress) is presented and fit to individual daily adherence in 28-day mHealth exercise programs. The ACT-R-DStress model refines the psychological construct of self-efficacy. To explain and predict the dynamics of self-efficacy and predict individual performance of targeted behaviors, the self-efficacy construct is implemented as a theory-based neurocognitive simulation of the interaction of behavioral goals, memories of past experiences, and behavioral performance.

  5. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.
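
    The paper's central conclusion lends itself to a one-line condition. In the schematic LaTeX below (notation assumed for illustration; the record itself gives only prose), R is a representation relation mapping physical states to abstract ones, H is the physical evolution, and C is the intended abstract computation:

      % The system is computing when encoding, physically evolving, and
      % decoding predicts the abstract evolution to within a tolerance:
      \[
        R\bigl(H(p)\bigr) \;\approx_{\varepsilon}\; C\bigl(R(p)\bigr)
      \]
      % i.e., the physical process H, viewed through the representation R,
      % commutes (up to error \varepsilon) with the abstract evolution C.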

  6. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  7. Best Manufacturing Practices: Report of Survey Conducted at UNISYS corporation Computer Systems Division, St. Paul, Minnesota

    DTIC Science & Technology

    1987-11-01

    assistance to the ATE test technicians by means of computer generated graphics on a 19" display terminal. The TEG presents colorized annotations on ACCA ...perform outstanding acts to meet goals. Savings and goals are auditable from reports, charts, SPC, and Oregon Matrix. COMPUTER-AIDED MANUFACTURING

  8. Assessment of medical communication skills by computer: assessment method and student experiences.

    PubMed

    Hulsman, R L; Mollema, E D; Hoos, A M; de Haes, J C J M; Donnison-Speijer, J D

    2004-08-01

    A computer-assisted assessment (CAA) program for communication skills designated ACT was developed using the objective structured video examination (OSVE) format. This method features assessment of cognitive scripts underlying communication behaviour, a broad range of communication problems covered in 1 assessment, highly standardised assessment and rating procedures, and large group assessments without complex organisation. Setting: The Academic Medical Centre (AMC) at the University of Amsterdam, the Netherlands. Aims: To describe the development of the AMC Communication Test (ACT); to describe our experiences with the examination and rating procedures; to present test score descriptives; and to present the students' opinions of ACT. The ACT presents films on history taking, breaking bad news and shared decision making. Each film is accompanied by 3 types of short essay questions derived from our assessment model: "knows", "knows why/when" and "knows how". Evaluation questions about ACT were integrated into the assessment. Participants: A total of 210 third year medical undergraduates were assessed. This study reports on the 110 (53%) students who completed all evaluation questions. Marking 210 examinations took about 17 days. The test score matched a normal distribution and showed a good level of discrimination of the students. About 75% passed the examination. Some support for the validity of our assessment model was found in the students' differential performance on the 3 types of questions. The ACT was well received. Student evaluations confirmed our efforts to develop realistic films that related well to the communication training programme. The ACT is a useful assessment method which complements interpersonal assessment methods for the evaluation of the medical communication skills of undergraduates.

  9. Implementation and Performance Exploration of a Cross-Genre Part of Speech Tagging Methodology to Determine Dialog Act Tags in the Chat Domain

    DTIC Science & Technology

    2010-09-01

    the flies.”) or a present tense verb when describing what an airplane does (“An airplane flies.”) This disambiguation is, in general, computationally...as part-of-speech and dialog-act tagging, and yet the volume of data created makes human analysis impractical. We present a cross-genre part-of...acceptable automatic dialog-act determination. Furthermore, we show that a simple naïve Bayes classifier achieves the same performance in a fraction of
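
    The record reports that a simple naïve Bayes classifier matches a more elaborate tagger for dialog-act determination at a fraction of the cost. A minimal sketch of such a classifier (scikit-learn, with invented example chat utterances; not the report's actual data or features):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Tiny invented chat corpus labelled with dialog-act tags.
      utterances = ["hey everyone", "hi there", "what time is the meeting",
                    "where are you all from", "i think that is right",
                    "no that is wrong", "see you later", "bye all"]
      acts = ["greet", "greet", "question", "question",
              "statement", "statement", "close", "close"]

      # Bag-of-words features feeding a multinomial naive Bayes model.
      tagger = make_pipeline(CountVectorizer(), MultinomialNB())
      tagger.fit(utterances, acts)

      print(tagger.predict(["what time is lunch", "hi folks"]))
      # On this toy corpus the shared tokens steer the predictions
      # (e.g. toward 'question' and 'greet').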

  10. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…

  11. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance and potentially accuracy and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculation do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
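
    The trade-off the record tests can be reproduced in miniature: run the same smoothing calculation in double and half precision and compare. A small NumPy sketch (an illustration of the precision effect only, not the authors' dynamical core):

      import numpy as np

      def relax(precision, steps=200):
          """Repeatedly average a noisy 1D signal at the given precision."""
          rng = np.random.default_rng(0)
          x = rng.standard_normal(1024).astype(precision)
          three = precision(3.0)
          for _ in range(steps):
              # Each point is averaged with its periodic neighbours; every
              # arithmetic operation rounds to the working precision.
              x = ((np.roll(x, 1) + x + np.roll(x, -1)) / three).astype(precision)
          return x

      exact = relax(np.float64)
      cheap = relax(np.float16)        # half precision: ~11 significand bits
      print("max deviation:", np.abs(exact - cheap.astype(np.float64)).max())
      # The smooth large-scale structure survives in half precision; the
      # differences live at small scales, the regime where the record finds
      # inexact calculation acceptable.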

  12. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  13. The Federal Networking and Information Technology Research and Development Program: Funding Issues and Activities

    DTIC Science & Technology

    2009-05-14

    11 Next Generation Internet Research Act of 1998...performance computing R&D and called for increased interagency planning and coordination. The second, the Next Generation Internet Research Act of...law is available at http://www.nitrd.gov/congressional/laws/pl_102-194.html. 19 Next Generation Internet Research Act of 1998, P.L. 105-305, 15 U.S.C

  14. 77 FR 43084 - Office of Federal High-Performance Green Buildings; Federal Buildings Personnel Training Act...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    ... Federal High-Performance Green Buildings; Federal Buildings Personnel Training Act; Notification of... High- Performance Green Buildings, Office of Governmentwide Policy, General Services Administration... download from the Office of Federal High-Performance Green Building Web site Library at-- http://www.gsa...

  15. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include high-performance computing systems…

  16. ACTS: from ATLAS software towards a common track reconstruction software

    NASA Astrophysics Data System (ADS)

    Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration

    2017-10-01

    Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.

  17. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to other sites can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups, we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
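
    The record's load-dependent reconfiguration can be sketched as a simple threshold policy: lease remote nodes when local utilization is high, grant idle ones when it is low. A toy Python sketch (the thresholds, chunk size and site model are assumptions, not the paper's actual exchange policies):

      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          nodes: int           # nodes the site currently owns
          leased_in: int = 0   # nodes borrowed from other sites
          queued: int = 0      # waiting work, in nodes' worth

          def utilization(self):
              return min(1.0, self.queued / (self.nodes + self.leased_in))

      def rebalance(sites, high=0.9, low=0.3, chunk=8):
          """One reconfiguration round: overloaded sites lease from idle ones."""
          donors = [s for s in sites if s.utilization() < low and s.nodes > chunk]
          for site in sites:
              if site.utilization() > high and donors:
                  donor = donors.pop()
                  donor.nodes -= chunk      # the idle site grants nodes...
                  site.leased_in += chunk   # ...which the busy site leases in

      a = Site("A", nodes=64, queued=120)   # heavily over-utilized
      b = Site("B", nodes=64, queued=4)     # mostly idle
      rebalance([a, b])
      print(a.nodes + a.leased_in, b.nodes)  # 72 56: A gains capacity, B shrinks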

  18. Design of a Production System for Cognitive Modeling #1. Technical Report 77-2.

    ERIC Educational Resources Information Center

    Anderson, John R.; Kline, Paul J.

    This report describes several of the design decisions underlying ACT, a production system model of human cognition. ACT can be considered a high level computer programming language as well as a theory of the cognitive mechanisms underlying human information processing. ACT design decisions were based on both psychological and artificial…

  19. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    NREL's High-Performance Computing (HPC) User Facility provides computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System, to advance energy technologies.

  20. Advanced communications technology satellite high burst rate link evaluation terminal experiment control and monitor software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.

  1. High Performance Computing Meets Energy Efficiency - Continuum Magazine | NREL

    Science.gov Websites

    The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data systems. (Turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.)

  2. Neurobehaviorally Inspired ACT-R Model of Sleep Deprivation: Decreased Performance in Psychomotor Vigilance

    DTIC Science & Technology

    2006-08-01

    ABSTRACT This report describes how changes in architectural parameters in the Adaptive Control of Thought – Rational (ACT-R) can be used to understand...Computational Models of Cognition Cognitive architectures like the Adaptive Control of Thought – Rational (ACT-R) and Soar provide an alternative to...Belavkin (2001) to simulate the role of emotion in decision making; and by Ritter and colleagues (2004) to simulate pre-task appraisal and anxiety

  3. An Exploratory Case Study of Computer Use in a Primary School Mathematics Classroom: New Technology, New Pedagogy? Research: Information and Communication Technologies

    ERIC Educational Resources Information Center

    Hardman, Joanne

    2005-01-01

    Because computers potentially transform pedagogy, much has been made of their ability to impact positively on student performance, particularly in subjects such as mathematics and science. However, there is currently a dearth of research regarding exactly how the computer acts as a transformative tool in disadvantaged schools. Drawing on a…

  4. Inference of RhoGAP/GTPase regulation using single-cell morphological data from a combinatorial RNAi screen.

    PubMed

    Nir, Oaz; Bakal, Chris; Perrimon, Norbert; Berger, Bonnie

    2010-03-01

    Biological networks are highly complex systems, consisting largely of enzymes that act as molecular switches to activate/inhibit downstream targets via post-translational modification. Computational techniques have been developed to perform signaling network inference using some high-throughput data sources, such as those generated from transcriptional and proteomic studies, but comparable methods have not been developed to use high-content morphological data, which are emerging principally from large-scale RNAi screens, to these ends. Here, we describe a systematic computational framework based on a classification model for identifying genetic interactions using high-dimensional single-cell morphological data from genetic screens, apply it to RhoGAP/GTPase regulation in Drosophila, and evaluate its efficacy. Augmented by knowledge of the basic structure of RhoGAP/GTPase signaling, namely, that GAPs act directly upstream of GTPases, we apply our framework for identifying genetic interactions to predict signaling relationships between these proteins. We find that our method makes mediocre predictions using only RhoGAP single-knockdown morphological data, yet achieves vastly improved accuracy by including original data from a double-knockdown RhoGAP genetic screen, which likely reflects the redundant network structure of RhoGAP/GTPase signaling. We consider other possible methods for inference and show that our primary model outperforms the alternatives. This work demonstrates the fundamental fact that high-throughput morphological data can be used in a systematic, successful fashion to identify genetic interactions and, using additional elementary knowledge of network structure, to infer signaling relations.
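
    As a schematic of the classification-based framework (the authors' actual features, labels, and model are not reproduced here; the data below are synthetic), one can train a classifier on per-pair summaries of single-cell morphology and score candidate gene pairs for interaction:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        # Synthetic stand-in: 200 gene pairs x 20 morphological features,
        # with binary labels marking a genetic interaction.
        X = rng.normal(size=(200, 20))
        y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[:150], y[:150])
        # Interaction scores for held-out pairs:
        print(clf.predict_proba(X[150:])[:, 1][:5])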

  5. Extreme Scale Computing to Secure the Nation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L; McGraw, J R; Johnson, J R

    2009-11-10

    Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the Cold War, national security requirements continued to drive the development of high-performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design and numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software, and in fact events following the end of the Cold War have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance, and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program. The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. In particular, continued observance and potential Senate ratification of the CTBT, together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools: computer codes must be developed that replace phenomenology with increased levels of scientific understanding, together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high-performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies, as well as a rapid forensic capability for determining a nuclear weapon's design from post-detonation evidence (nuclear counterterrorism).

  6. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  7. 29 CFR 4.178 - Computation of hours worked.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Compliance with Compensation Standards § 4.178 Computation of hours worked. Since employees subject to the... such hours are adequately segregated, as indicated in § 4.179, compensation in accordance with the Act will be required for all hours of work in any workweek in which the employee performs any work in...

  8. Computational Science News | Computational Science | NREL

    Science.gov Websites

    News items include liquid-cooled high-performance computing technology at the ESIF and, on February 28, 2018, the launch of a new website for HPC system users: the National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.

  9. Assessment & Commitment Tracking System (ACTS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Robert A.; Childs, Teresa A.; Miller, Michael A.

    2004-12-20

    The ACTS computer code provides a centralized tool for planning and scheduling assessments, tracking and managing actions associated with assessments or that result from an event or condition, and "mining" data for reporting and analyzing information for improving performance. The ACTS application is designed to work with the MS SQL database management system; all database interfaces are written in SQL. The following software is used to develop and support the ACTS application: ColdFusion, HTML, JavaScript, Quest TOAD, Microsoft Visual SourceSafe (VSS), an HTML mailer for sending email, Microsoft SQL, and Microsoft Internet Information Server.

  10. The High-Performing Preschool: Story Acting in Head Start Classrooms

    ERIC Educational Resources Information Center

    McNamee, Gillian Dowley

    2015-01-01

    "The High-Performing Preschool" takes readers into the lives of three- and four-year-old Head Start students during their first year of school and focuses on the centerpiece of their school day: story acting. In this activity, students act out stories from high-quality children's literature as well as stories dictated by their peers.…

  11. Design and Performance of the ACTS Gigabit Satellite Network High Data-Rate Ground Station

    NASA Technical Reports Server (NTRS)

    Hoder, Doug; Kearney, Brian

    1995-01-01

    The ACTS High Data-Rate Ground Stations were built to support the ACTS Gigabit Satellite Network (GSN). The ACTS GSN was designed to provide fiber-compatible SONET service to remote nodes and networks through a wideband satellite system. The ACTS satellite is unique in its extremely wide bandwidth and electronically controlled spot beam antennas. This paper discusses the requirements, design, and performance of the RF section of the ACTS High Data-Rate Ground Stations and constituent hardware. The ACTS transponder systems incorporate highly nonlinear hard limiting, which introduced a major complexity into the design and subsequent modification of the ground stations. A discussion of the peculiarities of the ACTS spacecraft transponder system and their impact is included.

  12. High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions

    DTIC Science & Technology

    2016-08-30

    Final Report: High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions. A dedicated high-performance computer cluster was... (Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211; results appeared in peer-reviewed journals.)

  13. 75 FR 63524 - Computer Matching and Privacy Protection Act of 1988; Report of Matching Program: RRB and State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-15

    ... RAILROAD RETIREMENT BOARD Computer Matching and Privacy Protection Act of 1988; Report of Matching... Railroad Retirement Act. SUMMARY: As required by the Computer Matching and Privacy Protection Act of [[Page...: Under certain circumstances, the Computer Matching and Privacy Protection Act of 1988, Public Law 100...

  14. Antennas for 20/30 GHz and beyond

    NASA Technical Reports Server (NTRS)

    Chen, C. Harry; Wong, William C.; Hamada, S. Jim

    1989-01-01

    Antennas at 20/30 GHz and higher frequencies, due to their small wavelengths, offer capabilities for many space applications. With the government-sponsored space programs (such as ACTS) of recent years, the industry has gone through the learning curve of designing and developing high-performance, multi-function antennas in this frequency range. Design and analysis tools (such as the computer modelling used in feedhorn design and in reflector surface and thermal distortion analysis) are available. The components/devices (such as BFNs, weight modules, and feedhorns) are space-qualified. The manufacturing procedures (such as reflector surface control) are refined to meet the stringent tolerances that accompany high frequencies. The integration and testing facilities (such as near-field ranges) have also advanced to facilitate precision assembly and performance verification. These capabilities, essential to the successful design and development of high-frequency spaceborne antennas, should find more space applications (such as ESGP) beyond communications.

  15. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
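
    In the same spirit (though not the authors' mathematical reformulation), an optimizer can be pointed directly at a cognitive model's fit-to-data objective; here is a toy sketch with an invented two-parameter learning-curve model and hypothetical observed scores:

        import numpy as np
        from scipy.optimize import minimize

        observed = np.array([0.42, 0.55, 0.63, 0.70])   # hypothetical human data

        def model_performance(params, trials=np.arange(1, 5)):
            noise, decay = params
            # Invented stand-in for a simulated ACT-R learning curve.
            return 1.0 - np.exp(-trials / (1.0 + 10.0 * decay)) * (1.0 - noise)

        def objective(params):
            return float(np.sum((model_performance(params) - observed) ** 2))

        result = minimize(objective, x0=[0.1, 0.5], method="Nelder-Mead")
        print(result.x, result.fun)   # best-fitting parameters and residual error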

  16. Correlation between Preoperative High Resolution Computed Tomography (CT) Findings with Surgical Findings in Chronic Otitis Media (COM) Squamosal Type.

    PubMed

    Karki, S; Pokharel, M; Suwal, S; Poudel, R

    Background The exact role of high resolution computed tomography (HRCT) of the temporal bone in the preoperative assessment of chronic suppurative otitis media atticoantral disease remains controversial. Objective To evaluate the role of HRCT temporal bone in chronic suppurative otitis media atticoantral disease and to compare preoperative computed tomographic findings with intra-operative findings. Method Prospective, analytical study conducted among 65 patients with chronic suppurative otitis media atticoantral disease in the Department of Radiodiagnosis, Kathmandu University Dhulikhel Hospital, between January 2015 and July 2016. The operative findings were compared with the results of imaging. The parameters of comparison were erosion of the ossicles, scutum, facial canal, lateral semicircular canal, and sigmoid and tegmen plates, along with extension of disease to the sinus tympani and facial recess. Sensitivity, specificity, negative predictive value, and positive predictive value were calculated. Result HRCT temporal bone offered sensitivity (Se) and specificity (Sp) of 100% for visualization of sigmoid and tegmen plate erosion. The performance of HRCT in detecting malleus (Se=100%, Sp=95.23%), incus (Se=100%, Sp=80.48%), and stapes (Se=96.55%, Sp=71.42%) erosion was excellent. It offered precise information about facial canal erosion (Se=100%, Sp=75%), scutum erosion (Se=100%, Sp=96.87%), and extension of disease to the facial recess and sinus tympani (Se=83.33%, Sp=100%). HRCT showed specificity of 100% for lateral semicircular canal erosion but with low sensitivity (Se=53.84%). Conclusion The findings of HRCT and the intra-operative findings were well comparable except for lateral semicircular canal erosion. HRCT temporal bone acts as a road map for the surgeon to identify the extent of disease, plan the appropriate procedure, and prepare for potential complications that may be encountered during surgery.

  17. Grand Challenges: High Performance Computing and Communications. The FY 1992 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…

  18. Characterization of the Protein Unfolding Processes Induced by Urea and Temperature

    PubMed Central

    Rocco, Alessandro Guerini; Mollica, Luca; Ricchiuto, Piero; Baptista, António M.; Gianazza, Elisabetta; Eberini, Ivano

    2008-01-01

    Correct folding is critical for the biological activities of proteins. As a contribution to a better understanding of the protein (un)folding problem, we studied the effect of temperature and of urea on peptostreptococcal Protein L destructuration. We performed standard molecular dynamics simulations at 300 K, 350 K, 400 K, and 480 K, both in 10 M urea and in water. Protein L followed at least two alternative unfolding pathways. Urea caused the loss of secondary structure acting preferentially on the β-sheets, while leaving the α-helices almost intact; on the contrary, high temperature preserved the β-sheets and led to a complete loss of the α-helices. These data suggest that urea and high temperature act through different unfolding mechanisms, and protein secondary motives reveal a differential sensitivity to various denaturant treatments. As further validation of our results, replica-exchange molecular dynamics simulations of the temperature-induced unfolding process in the presence of urea were performed. This set of simulations allowed us to compute the thermodynamical parameters of the process and confirmed that, in the configurational space of Protein L unfolding, both of the above pathways are accessible, although to a different relative extent. PMID:18065481

  19. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    High-performance computing facilities at NREL, including the High-Performance Computing Data Center, provide the high-speed computation needed to develop strategies that optimize the entire energy system.

  20. A Computer Based Program to Improve Reading and Mathematics Scores for High School Students.

    ERIC Educational Resources Information Center

    Bond, Carole L.; And Others

    A study examined the effect on reading achievement, mathematics achievement, and ACT scores when computer-based instruction (CBI) was compressed into a 6-week period. In addition, the effects of learning style and receptive language deficits on these scores were studied. Computer-based instruction is a primary source of instruction that…

  1. Students Upgrading through Computer and Career Education System Services (Project SUCCESS). Final Evaluation Report 1992-93. OER Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Educational Research.

    Student Upgrading through Computer and Career Education System Services (Project SUCCESS) was an Elementary and Secondary Education Act Title VII-funded project in its third year of operation. Project SUCCESS served 460 students of limited English proficiency at two high schools in Brooklyn and one high school in Manhattan (New York City).…

  2. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ... of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L.... 100-503, the Computer Matching and Privacy Protection Act (CMPPA) of 1988), the Office of Management... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS...

  3. iDASH: integrating data for analysis, anonymization, and sharing

    PubMed Central

    Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2011-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses. PMID:22081224

  4. iDASH: integrating data for analysis, anonymization, and sharing.

    PubMed

    Ohno-Machado, Lucila; Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2012-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses.

  5. High-Performance Computing Data Center Warm-Water Liquid Cooling | Computational Science | NREL

    Science.gov Websites

    NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled. Warm-water liquid cooling technologies offer a more energy-efficient solution that also allows for effective use of the waste heat.

  6. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
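
    The core construction is simple to sketch (the paper names no specific toolkit; networkx is used here for illustration): build a graph whose edges link co-authors of the same article, then rank authors by connectivity.

        import itertools
        import networkx as nx

        # Toy stand-in for Scopus records: each article is its author list.
        articles = [["Kim", "Lee", "Park"], ["Kim", "Choi"], ["Lee", "Park"]]

        G = nx.Graph()
        for authors in articles:
            # Every pair of co-authors on an article becomes an edge.
            G.add_edges_from(itertools.combinations(authors, 2))

        # Rank authors by number of distinct collaborators (degree).
        print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))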

  7. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-01

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 103-105 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.
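
    Schematically, and in generic notation rather than the authors', the forces are negative gradients of the hybrid Hamiltonian, and the FMM earns its linear scaling by replacing distant pairwise Coulomb sums with truncated multipole expansions:

        \mathbf{F}_i = -\nabla_{\mathbf{R}_i} H_{\mathrm{DFT/PMM}}(\mathbf{R}_1,\dots,\mathbf{R}_N),
        \qquad
        \Phi(\mathbf{r}) \approx \sum_{l=0}^{p} \sum_{m=-l}^{l} \frac{M_{lm}}{r^{l+1}} Y_{lm}(\theta,\varphi),

    where the moments M_{lm} summarize a distant charge cluster and the truncation order p sets the accuracy/efficiency trade-off.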

  8. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added; the problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
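
    The balance argument can be made quantitative with a back-of-the-envelope roofline check (all numbers below are invented for illustration): a kernel whose arithmetic intensity falls below the machine balance is memory-bound, so adding processors to a shared bus raises peak FLOP/s without raising attainable throughput.

        def attainable_gflops(peak_gflops, mem_bandwidth_gbs, flops_per_byte):
            """Roofline model: performance is capped by compute or by memory."""
            return min(peak_gflops, mem_bandwidth_gbs * flops_per_byte)

        # Hypothetical cluster node: 4 processors sharing one memory bus.
        peak = 4 * 10.0            # GFLOP/s of aggregate compute
        bandwidth = 5.0            # GB/s of shared memory bandwidth
        intensity = 0.5            # FLOPs per byte for an image-convolution kernel
        print(attainable_gflops(peak, bandwidth, intensity))  # 2.5: memory-bound

    Doubling the processor count doubles peak but leaves the answer at 2.5 GFLOP/s, which is exactly the failure-to-scale symptom described above.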

  9. 77 FR 39748 - Computer Matching and Privacy Protection Act of 1988; Report of Matching Program: RRB and State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... RAILROAD RETIREMENT BOARD Computer Matching and Privacy Protection Act of 1988; Report of Matching.... General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the Privacy... of an existing computer matching program due to expire on August 12, 2012. SUMMARY: The Privacy Act...

  10. 76 FR 71417 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Law Enforcement Agencies (LEA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ...; Computer Matching Program (SSA/ Law Enforcement Agencies (LEA)) Match Number 5001 AGENCY: Social Security... protections for such persons. The Privacy Act, as amended, regulates the use of computer matching by Federal... accordance with the Privacy Act of 1974, as amended by the Computer Matching and Privacy Protection Act of...

  11. Marc Henry de Frahan | NREL

    Science.gov Websites

    Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high-performance computing hardware architectures. Research interests include high-performance computing and high-order numerical methods for computational fluid dynamics.

  12. 77 FR 62059 - Privacy Act of 1974, as Amended; Revisions to Existing Systems of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... and forms, microfilm or microfiche, and in computer processable storage media such as personnel system... 1974; the Federal Information Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986... apply: The Privacy Act of 1974; the Federal Information Security Management Act of 2002; the Computer...

  13. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
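
    At the heart of HPCG is a preconditioned conjugate-gradient solve; a minimal unpreconditioned version of the iteration (HPCG additionally applies a multigrid/symmetric Gauss-Seidel preconditioner, omitted here) looks like:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Solve A x = b for symmetric positive-definite A."""
            x = np.zeros_like(b)
            r = b - A @ x          # residual
            p = r.copy()           # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        print(conjugate_gradient(A, np.array([1.0, 2.0])))

    The sparse matrix-vector and dot products in this loop are the memory-bandwidth-bound patterns HPCG measures and that dense benchmarks such as HPL do not.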

  14. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  15. POEM: Identifying Joint Additive Effects on Regulatory Circuits.

    PubMed

    Botzman, Maya; Nachshon, Aharon; Brodt, Avital; Gat-Viks, Irit

    2016-01-01

    Expression Quantitative Trait Locus (eQTL) mapping tackles the problem of identifying variation in DNA sequence that have an effect on the transcriptional regulatory network. Major computational efforts are aimed at characterizing the joint effects of several eQTLs acting in concert to govern the expression of the same genes. Yet, progress toward a comprehensive prediction of such joint effects is limited. For example, existing eQTL methods commonly discover interacting loci affecting the expression levels of a module of co-regulated genes. Such "modularization" approaches, however, are focused on epistatic relations and thus have limited utility for the case of additive (non-epistatic) effects. Here we present POEM (Pairwise effect On Expression Modules), a methodology for identifying pairwise eQTL effects on gene modules. POEM is specifically designed to achieve high performance in the case of additive joint effects. We applied POEM to transcription profiles measured in bone marrow-derived dendritic cells across a population of genotyped mice. Our study reveals widespread additive, trans-acting pairwise effects on gene modules, characterizes their organizational principles, and highlights high-order interconnections between modules within the immune signaling network. These analyses elucidate the central role of additive pairwise effect in regulatory circuits, and provide computational tools for future investigations into the interplay between eQTLs. The software described in this article is available at csgi.tau.ac.il/POEM/.
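
    Schematically (this is not POEM's actual estimator; the data are synthetic), an additive pairwise effect on a module can be captured by regressing module expression on two genotypes jointly:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        g1 = rng.integers(0, 2, n)     # genotype at locus 1 (0/1)
        g2 = rng.integers(0, 2, n)     # genotype at locus 2 (0/1)
        # Module expression with purely additive (non-epistatic) effects:
        expr = 0.8 * g1 + 0.5 * g2 + rng.normal(scale=0.3, size=n)

        # Fit expr ~ b0 + b1*g1 + b2*g2 by least squares.
        X = np.column_stack([np.ones(n), g1, g2])
        beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
        print("additive effects:", beta[1:])   # approximately [0.8, 0.5]

    An epistasis-oriented method would test an extra g1*g2 interaction term; the point of the additive model is that both loci contribute even when that term is zero.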

  16. POEM: Identifying Joint Additive Effects on Regulatory Circuits

    PubMed Central

    Botzman, Maya; Nachshon, Aharon; Brodt, Avital; Gat-Viks, Irit

    2016-01-01

    Motivation: Expression Quantitative Trait Locus (eQTL) mapping tackles the problem of identifying variation in DNA sequence that have an effect on the transcriptional regulatory network. Major computational efforts are aimed at characterizing the joint effects of several eQTLs acting in concert to govern the expression of the same genes. Yet, progress toward a comprehensive prediction of such joint effects is limited. For example, existing eQTL methods commonly discover interacting loci affecting the expression levels of a module of co-regulated genes. Such “modularization” approaches, however, are focused on epistatic relations and thus have limited utility for the case of additive (non-epistatic) effects. Results: Here we present POEM (Pairwise effect On Expression Modules), a methodology for identifying pairwise eQTL effects on gene modules. POEM is specifically designed to achieve high performance in the case of additive joint effects. We applied POEM to transcription profiles measured in bone marrow-derived dendritic cells across a population of genotyped mice. Our study reveals widespread additive, trans-acting pairwise effects on gene modules, characterizes their organizational principles, and highlights high-order interconnections between modules within the immune signaling network. These analyses elucidate the central role of additive pairwise effect in regulatory circuits, and provide computational tools for future investigations into the interplay between eQTLs. Availability: The software described in this article is available at csgi.tau.ac.il/POEM/. PMID:27148351

  17. Multisensor surveillance data augmentation and prediction with optical multipath signal processing

    NASA Astrophysics Data System (ADS)

    Bush, G. T., III

    1980-12-01

    The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000 gallon spill of soybean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered; an optical detector could readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to measure the amount of spread automatically. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.
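
    The deconvolutional filtering mentioned above can be sketched with a standard FFT-based Wiener filter (the study's actual optical implementation differs; the toy image and blur below are invented):

        import numpy as np

        def wiener_deconvolve(observed, psf, noise_power=1e-3):
            """Estimate the image that, convolved with psf, gave `observed`."""
            H = np.fft.fft2(psf, s=observed.shape)
            G = np.fft.fft2(observed)
            # Wiener filter: conj(H) / (|H|^2 + noise) instead of a bare 1/H,
            # which would amplify noise wherever H is small.
            F = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
            return np.real(np.fft.ifft2(F))

        img = np.zeros((64, 64)); img[30:34, 30:34] = 1.0   # toy "spill"
        psf = np.ones((5, 5)) / 25.0                        # uniform spreading blur
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                       np.fft.fft2(psf, s=img.shape)))
        print(wiener_deconvolve(blurred, psf).max())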

  18. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal for communicating with the ACTS in various experiments by government agencies, universities, and industry. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software; it details the current implementation of the software from a technical perspective, including an overview of the software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event, and also provides a means to demonstrate the capability of the software.

  19. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal for communicating with the ACTS in various experiments by government agencies, universities, and industry. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software; configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide that details the current implementation of the software from a technical perspective, including an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software, including command transmission, telemetry reception, error detection, and error recovery procedures.

  20. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  1. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  2. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  3. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification...

  4. 76 FR 11435 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-02

    ... Security Administration. SUMMARY: Pursuant to the Computer Matching and Privacy Protection Act of 1988, Public Law 100-503, the Computer Matching and Privacy Protections Amendments of 1990, Pub. L. 101-508... Interpreting the Provisions of Public Law 100-503, the Computer Matching and Privacy Protection Act of 1988...

  5. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false High performance computers: Post... Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS SPECIAL REPORTING AND NOTIFICATION § 743.2 High performance computers: Post shipment...

  6. Improving College Performance and Retention the Easy Way: Unpacking the ACT Exam. NBER Working Paper No. 17119

    ERIC Educational Resources Information Center

    Bettinger, Eric P.; Evans, Brent J.; Pope, Devin G.

    2011-01-01

    Colleges rely on the ACT exam in their admission decisions to increase their ability to differentiate between students likely to succeed and those who have a high risk of under-performing and dropping out. We show that two of the four subtests of the ACT, English and Mathematics, are highly predictive of positive college outcomes, while the other…

  7. Alliance for Computational Science Collaboration: HBCU Partnership at Alabama A&M University Continuing High Performance Computing Research and Education at AAMU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Xiaoqing; Deng, Z. T.

    2009-11-10

    This is the final report for Department of Energy (DOE) project DE-FG02-06ER25746, entitled "Continuing High Performance Computing Research and Education at AAMU". The three-year project started on August 15, 2006, and ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer internships and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for a hypersonic shockwave control experiment and in computational science related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Lab (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1,500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high performance Linux workstations; and (5) conducted image analysis for the electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU had a great impact on minority students. As praised by the Accreditation Board for Engineering and Technology (ABET) in 2009, "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students as computational science scholars. This is a wonderful opportunity to recruit under-represented students." Three ASEE papers were published in the 2007, 2008, and 2009 proceedings of the ASEE Annual Conferences, with presentations made at those conferences. It is critical that these research and education activities continue.

  8. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; apply lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition a high-order fluids code, FDL3DI, to solving Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  9. 75 FR 53004 - Privacy Act of 1974, as Amended; Notice of Computer-Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... report of this computer-matching program with the Committee on Homeland Security and Governmental Affairs... INFORMATION: A. General The Computer-Matching and Privacy Protection Act of 1988, (Pub. L. 100-503), amended... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer-Matching Program...

  10. 76 FR 77015 - Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-09

    ... 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection Act of 1988 (Pub. L... DEPARTMENT OF JUSTICE [AAG/A Order No. 001/2011] Privacy Act of 1974; Computer Matching Agreement AGENCY: Department of Justice. ACTION: Notice--computer matching between the Department of Justice and...

  11. 78 FR 12128 - Privacy Act of 1974; Computer Matching Program (SSA/Department of the Treasury, Internal Revenue...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0067] Privacy Act of 1974; Computer Matching... Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching program... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503...

  12. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code have been developed which compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve; thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. That code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
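
    The structure of such a model is easy to state: if independent series components must all survive to the time of the k-th orbit correction, the success probability is a product of component reliabilities evaluated at that time. A sketch with invented exponential failure rates:

        import numpy as np

        def mission_reliability(t_hours, failure_rates_per_hour):
            """P(all independent series components alive at time t),
            assuming exponentially distributed failures."""
            return float(np.exp(-t_hours * np.sum(failure_rates_per_hour)))

        # Hypothetical thruster-valve, tank, and catalyst-bed failure rates:
        rates = np.array([2e-6, 1e-6, 5e-6])
        for k, t in enumerate([500.0, 2000.0, 8000.0], start=1):
            print(f"orbit correction {k}: P(success) = {mission_reliability(t, rates):.4f}")

    In the actual model the timing of each correction comes from the blowdown performance code rather than from a fixed schedule as assumed here.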

  13. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
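
    Of the techniques listed, message passing is the easiest to demonstrate compactly. This toy example uses Python's multiprocessing pipes purely as a stand-in for an HPC message-passing library such as MPI:

        from multiprocessing import Pipe, Process

        def worker(conn):
            # Receive a chunk of work, compute a partial sum, send it back.
            chunk = conn.recv()
            conn.send(sum(x * x for x in chunk))
            conn.close()

        if __name__ == "__main__":
            data = list(range(1000))
            parents = []
            for chunk in (data[:500], data[500:]):
                parent, child = Pipe()
                Process(target=worker, args=(child,)).start()
                parent.send(chunk)
                parents.append(parent)
            # Combine the partial results (sum of squares of 0..999).
            print(sum(conn.recv() for conn in parents))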

  14. Grand Challenges 1993: High Performance Computing and Communications. A Report by the Committee on Physical, Mathematical, and Engineering Sciences. The FY 1993 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report presents the United States research and development program for 1993 for high performance computing and computer communications (HPCC) networks. The first of four chapters presents the program goals and an overview of the federal government's emphasis on high performance computing as an important factor in the nation's scientific and…

  15. Use of medical information by computer networks raises major concerns about privacy.

    PubMed Central

    O'Reilly, M

    1995-01-01

    The development of computer databases and long-distance computer networks is leading to improvements in Canada's health care system. However, these developments come at a cost and require a balancing act between access and confidentiality. Columnist Michael O'Reilly, who in this article explores the security of computer networks, notes that respect for patients' privacy must be given as high a priority as the ability to see their records in the first place. PMID:7600474

  16. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision, and other arithmetic functions at execution times at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K of random access memory (RAM), and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.

  17. 77 FR 2299 - Office of Child Support Enforcement; Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Support Enforcement; Privacy Act of 1974; Computer Matching Agreement AGENCY: Office of Child Support Enforcement (OCSE), ACF, HHS. ACTION: Notice of a Computer Matching Program. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended, OCSE is publishing notice of a computer matching program...

  18. 76 FR 5235 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA Internal Match)-Match Number 1014

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ...; Computer Matching Program (SSA Internal Match)--Match Number 1014 AGENCY: Social Security Administration... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching....C. 552a, as amended, and the provisions of the Computer Matching and Privacy Protection Act of 1988...

  19. 77 FR 74019 - Office of Child Support Enforcement; Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... Support Enforcement; Privacy Act of 1974; Computer Matching Agreement AGENCY: Office of Child Support Enforcement (OCSE), ACF, HHS. ACTION: Notice of a Computer Matching Program. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended, OCSE is publishing notice of a computer matching program...

  20. 78 FR 70971 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... will file a report of this computer-matching program with the Committee on Homeland Security and... . SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988, (Pub. L. 100-503... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  1. 75 FR 29774 - Office of Child Support Enforcement; Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-27

    ... Support Enforcement; Privacy Act of 1974; Computer Matching Agreement AGENCY: Office of Child Support Enforcement (OCSE), ACF, HHS. ACTION: Notice of a computer matching program. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended, OCSE is publishing notice of a computer matching program...

  2. 75 FR 31457 - Office of Child Support Enforcement; Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... Support Enforcement; Privacy Act of 1974; Computer Matching Agreement AGENCY: Office of Child Support Enforcement (OCSE), ACF, HHS. ACTION: Notice of a Computer Matching Program. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended, OCSE is publishing notice of a computer matching program...

  3. 78 FR 69925 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Fiscal Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching... savings securities. C. Authority for Conducting the Matching Program This computer matching agreement sets... amended by the Computer Matching and Privacy Protection Act of 1988, as amended, and the regulations and...

  4. 77 FR 32085 - Privacy Act of 1974, as Amended; Renewal of Computer Matching Program Between the U.S. Department...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... Matching and Privacy Protection Act of 1988 (Pub. L. 100-503) and the Computer Matching and Privacy... DEPARTMENT OF EDUCATION Privacy Act of 1974, as Amended; Renewal of Computer Matching Program.... ACTION: Notice. SUMMARY: This document provides notice of the renewal of the computer matching program...

  5. Respiration-Averaged CT for Attenuation Correction of PET Images – Impact on PET Texture Features in Non-Small Cell Lung Cancer Patients

    PubMed Central

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li

    2016-01-01

    Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211
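
    For readers unfamiliar with the texture parameters listed above, the short Python sketch below computes one of them, SUV entropy, from a histogram of voxel uptake values; the bin count, the synthetic voxel data, and the log base are assumptions for illustration, not the paper's exact implementation.

    import numpy as np

    # Illustrative computation of one texture parameter named above. SUV entropy
    # is commonly defined as the Shannon entropy of a histogram of voxel SUVs
    # inside the segmented tumor; the data here are synthetic stand-ins.

    rng = np.random.default_rng(0)
    suv_voxels = rng.gamma(shape=2.0, scale=1.5, size=5000)  # fake tumor SUVs

    hist, _ = np.histogram(suv_voxels, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]                              # ignore empty bins
    suv_entropy = -np.sum(p * np.log2(p))     # Shannon entropy in bits
    print(f"SUV entropy: {suv_entropy:.3f} bits")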

  6. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization: it reveals the variation process and trend of a geographical phenomenon vividly and comprehensively. Dynamic visualization of both 2D and 3D spatial targets, across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine that must handle vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing, and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented using these hybrid acceleration techniques. The engine combines GPU computing, OpenGL, and high-performance algorithms with the advantages of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast numbers of dynamic targets. A prototype system was developed based on SuperMap GIS iObjects. In experiments designed for large-scale spatial data visualization, the engine showed a clear performance advantage: rendering two-dimensional and three-dimensional dynamic objects is about 20 times faster on GPU than on CPU.

  7. 75 FR 18841 - Office for Civil Rights; Privacy Act of 1974, Amended System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-13

    ... Privacy Act of 1974, Federal Information Security Management Act of 2002, Computer Security Act of 1987... 1974, Federal Information Security Management Act of 2002, Computer Security Act of 1987, the Paperwork... Oversight, the Chair of the Senate Committee on Homeland Security and Governmental Affairs, and the...

  8. 78 FR 15731 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0011] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and Immigration Services. ACTION: Notice. Overview Information: Privacy Act of 1974; Computer Matching Program...

  9. 78 FR 15732 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0007] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and Immigration Services. ACTION: Notice. Overview Information: Privacy Act of 1974; Computer Matching Program...

  10. Remote Earth Sciences data collection using ACTS

    NASA Technical Reports Server (NTRS)

    Evans, Robert H.

    1992-01-01

    Given the focus on global change and the attendant scope of such research, we anticipate significant growth of requirements for investigator interaction, processing system capabilities, and availability of data sets. The increased complexity of global processes requires interdisciplinary teams to address them; the investigators will need to interact on a regular basis, yet it is unlikely that a single institution will house sufficient investigators with the required breadth of skills. The complexity of the computations may also require resources beyond those located within a single institution; this lack of sufficient computational resources leads to a distributed system located at geographically dispersed institutions. Finally, the combination of long-term data sets like the Pathfinder datasets and the data to be gathered by new generations of satellites such as SeaWiFS and MODIS-N yields extraordinarily large amounts of data. All of these factors combine to increase demands on the communications facilities available; the demands are generating requirements for highly flexible, high-capacity networks. We have been examining the applicability of the Advanced Communications Technology Satellite (ACTS) to address the scientific, computational, and, primarily, communications questions resulting from global change research. As part of this effort, three scenarios for oceanographic use of ACTS have been developed; a full discussion is contained in Appendix B.

  11. The Emergence Of The National Research And Education Network (NREN) And Its Implications For American Telecommunications

    NASA Astrophysics Data System (ADS)

    Maloff, Joel H.

    1990-01-01

    "The nation which most completely assimilates high performance computing into its economy will very likely emerge as the dominant intellectual, economic, and technological force in the next century", Senator Albert Gore, Jr., May 18, 1989, while introducing Senate Bill 1067, "The National High Performance Computer Technology Act of 1989". A national network designed to link supercomputers, particle accelerators, researchers, educators, government, and industry is beginning to emerge. The degree to which the United States can mobilize the resources inherent within our academic, industrial and government sectors towards the establishment of such a network infrastructure will have direct bearing on the economic and political stature of this country in the next century. This program will have significant impact on all forms of information transfer, and peripheral benefits to all walks of life similar to those experienced from the moon landing program of the 1960's. The key to our success is the involvement of scientists, librarians, network designers, and bureaucrats in the planning stages. Collectively, the resources resident within the United States are awesome; individually, their impact is somewhat more limited. The engineers, technicians, business people, and educators participating in this conference have a vital role to play in the success of the National Research and Education Network (NREN).

  12. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions by both users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with the server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as Sourceforge, GitHub and others. Furthermore, the contribution discusses recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  13. Temporal response improvement for computed tomography fluoroscopy

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang

    1997-10-01

    Computed tomography fluoroscopy (CTF) has attracted significant attention recently, mainly due to the growing clinical application of CTF in interventional procedures such as guided biopsy. Although many studies have been conducted on its clinical efficacy, little attention has been paid to the temporal response and the inherent limitations of the CTF system. For example, during a biopsy operation, when the needle is inserted at a relatively high speed, the true needle position will not be correctly depicted in the CTF image because of the time delay. This could result in an overshoot or misplacement of the biopsy needle by the operator. In this paper, we first perform a detailed analysis of the temporal response of CTF by deriving a set of equations that describe the average location of a moving object observed by the CTF system. The accuracy of the equations is verified by computer simulations and experiments. We show that the CT reconstruction process acts as a low-pass filter on the motion function. As a result, there is an inherent time delay between the CTF display and the true biopsy needle motion and location. Based on this study, we propose a generalized underscan weighting scheme which significantly improves the performance of CTF in terms of time lag and delay.
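
    The low-pass-filter behavior described above can be illustrated with a toy Python model: a causal moving average stands in for the reconstruction window, and the displayed needle position trails the true one by roughly half the window length. The window length and motion profile are invented; this is not the paper's derivation.

    import numpy as np

    # Toy model of the CTF time-delay effect: reconstruction averages recent
    # projections, acting as a causal moving-average (low-pass) filter on the
    # needle position. Window length and motion are assumed for illustration.

    dt = 0.01                      # s per sample
    t = np.arange(0.0, 3.0, dt)
    true_pos = np.minimum(t, 1.0) * 50.0      # needle advances 50 mm/s, then stops

    window = int(1.0 / dt)         # 1 s reconstruction window (assumed)
    kernel = np.ones(window) / window
    displayed = np.convolve(true_pos, kernel, mode="full")[: len(t)]  # causal average

    # Compare when the true and displayed positions cross the 25 mm mark.
    lag_idx = np.argmax(displayed >= 25.0) - np.argmax(true_pos >= 25.0)
    print(f"apparent delay at mid-travel: ~{lag_idx * dt:.2f} s")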

  14. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers model complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on such machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with their complex user interfaces, and managing data storage and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research with high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway.

  15. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas. T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers model complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on such machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with their complex user interfaces, and managing data storage and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research with high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway. PMID:26523124

  16. Data Storage and Transfer | High-Performance Computing | NREL

    Science.gov Websites

    High-Performance Computing (HPC) systems. Photo of computer server wiring and lights, blurred to show data. WinSCP for Windows File Transfers Use to transfer files from a local computer to a remote computer. Robinhood for File Management Use this tool to manage your data files on Peregrine. Best

  17. Asymmetric Core Computing for U.S. Army High-Performance Computing Applications

    DTIC Science & Technology

    2009-04-01

    ... Playstation 4 (should one be announced). ... Reconfigurable computing refers to performing computations using Field Programmable Gate Arrays (FPGAs). ... Report contents: Introduction; Relevant Technologies; Technical Approach; Research and Development Highlights (Cell, FPGAs).

  18. HPCCP/CAS Workshop Proceedings 1998

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)

    1999-01-01

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

  19. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
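
    The message-board pattern described above can be sketched schematically: a client posts a task over HTTP and later polls for its result. In the hedged Python example below, the base URL, endpoint paths, and JSON fields are invented placeholders, not QMachine's actual API.

    import json
    import urllib.request

    # Hypothetical sketch of the browser-volunteer pattern: a client posts a
    # task description to a message-board web service, volunteer browsers fetch
    # and execute it, and the client polls for the result. The URL and fields
    # are placeholders; running this against example.org will simply fail.

    BASE = "https://example.org/qm"          # placeholder service URL (assumed)

    def post_task(code, inputs):
        """Submit a task description; volunteers fetch and execute it."""
        body = json.dumps({"code": code, "inputs": inputs}).encode()
        req = urllib.request.Request(f"{BASE}/tasks", data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["task_id"]

    def poll_result(task_id):
        """Retrieve the result once a volunteer browser has finished."""
        with urllib.request.urlopen(f"{BASE}/tasks/{task_id}") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        task_id = post_task("x => x.toUpperCase()", ["acgt"])  # JS-style task body
        print(poll_result(task_id))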

  20. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
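
    As a minimal worked example of the performance-per-watt metric used above, the Python snippet below divides a benchmark score by power draw for two hypothetical nodes; the numbers are invented and are not measurements from the paper.

    # Performance-per-watt: benchmark throughput divided by power draw.
    # Scores and wattages below are invented placeholders, not measured values.

    systems = {
        "Xeon Phi node":   {"gflops": 1000.0, "watts": 300.0},
        "X-Gene SoC node": {"gflops": 60.0,   "watts": 25.0},
    }

    for name, s in systems.items():
        print(f"{name:15s}: {s['gflops'] / s['watts']:6.2f} GFLOPS/W")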

  1. High-performance computing on GPUs for resistivity logging of oil and gas wells

    NASA Astrophysics Data System (ADS)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
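
    The factor-once, solve-many pattern implied above can be sketched on CPU with standard libraries; the following Python example Cholesky-factors a stand-in symmetric positive-definite system and reuses the factor for several right-hand sides. The random matrix replaces a real finite-element system, and this is not the authors' CUDA implementation.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    # Generic CPU sketch: Cholesky-factor a symmetric positive-definite system
    # once, then reuse the factor for multiple right-hand sides, as a forward
    # solver would for repeated tool positions. The matrix is a random stand-in.

    rng = np.random.default_rng(1)
    n = 500
    A = rng.standard_normal((n, n))
    A = A @ A.T + n * np.eye(n)        # make the matrix SPD

    c, low = cho_factor(A)             # O(n^3) factorization, done once
    for k in range(3):                 # cheap O(n^2) solves afterwards
        b = rng.standard_normal(n)
        x = cho_solve((c, low), b)
        print(f"rhs {k}: residual = {np.linalg.norm(A @ x - b):.2e}")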

  2. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  3. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  4. Staff | Computational Science | NREL

    Science.gov Websites

    develops and leads laboratory-wide efforts in high-performance computing and energy-efficient data centers Professional IV-High Perf Computing Jim.Albin@nrel.gov 303-275-4069 Ananthan, Shreyas Senior Scientist - High -Performance Algorithms and Modeling Shreyas.Ananthan@nrel.gov 303-275-4807 Bendl, Kurt IT Professional IV-High

  5. 76 FR 39119 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the...: Notice of a computer matching program between the HUD and ED. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection Act of 1988 (Pub...

  6. 77 FR 74518 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    ... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program AGENCY: Office of Personnel Management. ACTION: Notice--computer matching between the Office of Personnel Management and the Social Security Administration. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as...

  7. 78 FR 35647 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-13

    ... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program AGENCY: Office of Personnel Management. ACTION: Notice of computer matching between the Office of Personnel Management and the Social Security Administration (CMA 1045). SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C...

  8. 75 FR 17788 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-07

    ... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program AGENCY: Office of Personnel Management. ACTION: Notice--computer matching between the Office of Personnel Management and the Social Security Administration. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as...

  9. 75 FR 31819 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-04

    ... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program AGENCY: Office of Personnel Management. ACTION: Notice--computer matching between the Office of Personnel Management and the Social Security Administration. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as...

  10. 78 FR 16564 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Office of Personnel Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... 1021 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of existing computer... above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0073] Privacy Act of 1974, as Amended...

  11. 75 FR 54162 - Privacy Act of 1974

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS Computer Match No. 2010-01; HHS Computer Match No. 1006] Privacy Act of 1974 AGENCY: Department of Health and...

  12. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called `Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  13. Computational Fluency Performance Profile of High School Students with Mathematics Disabilities

    ERIC Educational Resources Information Center

    Calhoon, Mary Beth; Emerson, Robert Wall; Flores, Margaret; Houchins, David E.

    2007-01-01

    The purpose of this descriptive study was to develop a computational fluency performance profile of 224 high school (Grades 9-12) students with mathematics disabilities (MD). Computational fluency performance was examined by grade-level expectancy (Grades 2-6) and skill area (whole numbers: addition, subtraction, multiplication, division;…

  14. 75 FR 30411 - Privacy Act of 1974; Report of a Modified or Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-01

    ... Privacy Act of 1974; the Federal Information Security Management Act of 2002; the Computer Fraud and Abuse... Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986; the Health Insurance Portability... systems and data files necessary for compliance with Title XI, Part C of the Social Security Act because...

  15. Detectors for the Atacama Cosmology Telescope

    NASA Astrophysics Data System (ADS)

    Marriage, Tobias Andrew

    The Atacama Cosmology Telescope (ACT) will make measurements of the brightness temperature anisotropy in the Cosmic Microwave Background (CMB) on degree to arcminute angular scales. The ACT observing site is located at 5200 m elevation, near the top of Cerro Toco in the Atacama Desert of northern Chile. This thesis presents research on the detectors which capture the image of the CMB formed at ACT's focal plane. In the first chapter, the primary brightness temperature fluctuations in the Cosmic Microwave Background are reviewed. In Chapter 2, a calculation shows how the CMB brightness is translated by ACT to an input power to the detectors. Chapter 3 describes the ACT detectors in detail and presents the response and sensitivity of the detectors to the input power computed in Chapter 2. Chapter 4 describes the detector fabrication at NASA Goddard Space Flight Center. Chapter 5 summarizes experiments which characterize the ACT detector performance.

  16. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
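
    A purely numerical sketch of inner-product associative recall is given below in Python: the (bipolar) input is correlated with each stored reference vector, the references are summed with those inner products as weights, and the result is thresholded and iterated. The stored patterns and corruption level are arbitrary stand-ins for binary images.

    import numpy as np

    # Inner-product associative recall on random +-1 patterns: correlate the
    # probe with each stored reference (the "optical" inner-product step),
    # form a weighted sum of references, threshold, and iterate.

    rng = np.random.default_rng(2)
    refs = rng.choice([-1, 1], size=(4, 64))       # stored binary images (+-1)

    def recall(x, iters=5):
        for _ in range(iters):
            weights = refs @ x                      # inner products
            x = np.sign(refs.T @ weights)           # weighted sum, then threshold
            x[x == 0] = 1                           # break ties consistently
        return x

    probe = refs[0].copy()
    flip = rng.choice(64, size=10, replace=False)   # corrupt 10 of 64 pixels
    probe[flip] *= -1
    out = recall(probe)
    print("pixels wrong after recall:", int(np.sum(out != refs[0])))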

  17. CW Interference Effects on High Data Rate Transmission Through the ACTS Wideband Channel

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Ngo, Duc H.; Tran, Quang K.; Tran, Diepchi T.; Yu, John; Kachmar, Brian A.; Svoboda, James S.

    1996-01-01

    Satellite communications channels are susceptible to various sources of interference. Wideband channels have a proportionally greater probability of receiving interference than narrowband channels. NASA's Advanced Communications Technology Satellite (ACTS) includes a 900 MHz bandwidth hardlimiting transponder which has provided an opportunity for the study of interference effects of wideband channels. A series of interference tests using two independent ACTS ground terminals measured the effects of continuous-wave (CW) uplink interference on the bit-error rate of a 220 Mbps digitally modulated carrier. These results indicate the susceptibility of high data rate transmissions to CW interference and are compared to results obtained with a laboratory hardware-based system simulation and a computer simulation.

  18. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  19. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  20. Effect of high-frequency spectral components in computer recognition of dysarthric speech based on a Mel-cepstral stochastic model.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2005-01-01

    Computer speech recognition for individuals with dysarthria, such as cerebral palsy patients, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, a hidden Markov model (HMM) was constructed and conditions were investigated that would provide improved performance for a dysarthric speech (isolated word) recognition system intended to act as an assistive/control tool. In particular, we investigated the effect of high-frequency spectral components on the recognition rate of the system to determine if they contributed useful additional information. A small vocabulary spoken by three cerebral palsy subjects was chosen. Mel-frequency cepstral coefficients extracted from 15 ms frames served as training input to an ergodic HMM setup. Subsequent results demonstrated that, in the current set of dysarthric data, no significant useful information was available above 5.5 kHz to enhance the system's ability to discriminate dysarthric speech. The level of variability in input dysarthric speech patterns limits the reliability of the system. However, its application as a rehabilitation/control tool to assist motor-impaired dysarthric individuals such as cerebral palsy subjects holds sufficient promise.
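
    The recognition setup described above, one HMM per vocabulary word with classification by maximum log-likelihood, can be sketched as follows in Python. The example assumes the third-party hmmlearn package, and synthetic 12-dimensional frames stand in for real Mel-cepstral features of dysarthric speech; model and vocabulary sizes are arbitrary.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # assumes the hmmlearn package

    # Schematic isolated-word recognizer: train one Gaussian HMM per word on
    # feature-frame sequences, then classify a test utterance by the model
    # with the highest log-likelihood. Features here are synthetic stand-ins.

    rng = np.random.default_rng(3)

    def fake_mfcc(word_id, n_frames=40):
        """Generate a feature sequence loosely centered on a word-specific mean."""
        return rng.normal(loc=word_id, scale=1.0, size=(n_frames, 12))

    vocab = [0, 1, 2]
    models = {}
    for w in vocab:
        seqs = [fake_mfcc(w) for _ in range(5)]            # 5 training tokens
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[w] = m

    test = fake_mfcc(1)
    scores = {w: m.score(test) for w, m in models.items()}
    print("recognized word:", max(scores, key=scores.get))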

  1. Promoting High-Performance Computing and Communications. A CBO Study.

    ERIC Educational Resources Information Center

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  2. HPC on Competitive Cloud Resources

    NASA Astrophysics Data System (ADS)

    Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff

    Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate, through empirical evaluation, the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and that computation time varies accordingly. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in wall-clock time. We also show that the smallest, cheapest (64-bit) instance in the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency should be introduced for commercial cloud environments where strong performance guarantees do not exist. Concepts like average and expected performance, expected execution time, expected cost to completion, and variance measures, traditionally ignored in the high-performance computing context, should now complement or even substitute for the standard definitions of efficiency.
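
    The alternative efficiency measures proposed above can be computed directly from repeated timings, as in the Python sketch below; the sample runtimes and hourly price are invented for illustration.

    import statistics

    # Expected-performance measures for shared cloud nodes: from repeated runs
    # of the same job, estimate expected wall time, its variance, and expected
    # cost to completion. Runtimes and the hourly price are invented.

    runtimes_h = [1.9, 2.4, 3.1, 2.2, 4.0, 2.0]   # wall-clock hours under contention
    price_per_hour = 0.68                          # assumed instance price (USD)

    mean_t = statistics.mean(runtimes_h)
    var_t = statistics.variance(runtimes_h)
    print(f"expected runtime : {mean_t:.2f} h (variance {var_t:.2f})")
    print(f"expected cost    : ${mean_t * price_per_hour:.2f} per run")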

  3. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solutions, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray Y-MP using an average of 3.63 processors.
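
    For orientation, a minimal dense Lanczos tridiagonalization is sketched below in Python, showing the matrix-vector multiply kernel the paper identifies as computationally intensive; a production structural eigensolver would add sparse storage, shifted factorizations, and reorthogonalization, none of which is shown here.

    import numpy as np

    # Minimal dense Lanczos tridiagonalization: m matrix-vector products build
    # a small tridiagonal matrix T whose extreme eigenvalues (Ritz values)
    # approximate those of A. A is a diagonal stand-in for a structural matrix.

    def lanczos(A, m, rng=np.random.default_rng(4)):
        n = A.shape[0]
        alphas, betas = [], []
        q_prev = np.zeros(n)
        q = rng.standard_normal(n)
        q /= np.linalg.norm(q)
        beta = 0.0
        for _ in range(m):
            w = A @ q - beta * q_prev          # matrix-vector multiply step
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha); betas.append(beta)
            q_prev, q = q, w / beta
        return (np.diag(alphas) + np.diag(betas[:-1], 1)
                + np.diag(betas[:-1], -1))

    A = np.diag(np.arange(1.0, 101.0))          # stand-in symmetric matrix
    T = lanczos(A, 30)
    print("largest Ritz value:", np.linalg.eigvalsh(T)[-1])  # approximates 100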

  4. 77 FR 49849 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Office of Child Support...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ...: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer-matching... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0021] Privacy Act of 1974, as Amended...

  5. 75 FR 32833 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Office of Personnel Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-09

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2009-0077] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Office of Personnel Management (OPM))--Match 1307 AGENCY: Social Security... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503...

  6. 77 FR 38880 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (SSA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching program that... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0002] Privacy Act of 1974, as Amended...

  7. 77 FR 27108 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Office of Child Support...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ...: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching... protections for such persons. The Privacy Act, as amended, regulates the use of computer matching by Federal... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0010] Privacy Act of 1974, as Amended...

  8. 78 FR 51264 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of the Treasury...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-20

    ... 1016 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer... above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0022] Privacy Act of 1974, as Amended...

  9. 77 FR 24756 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of Labor (DOL))-Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-25

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0084] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Department of Labor (DOL))--Match Number 1003 AGENCY: Social Security... above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988...

  10. 75 FR 18251 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2009-0066] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS))--Match 1305 AGENCY: Social Security... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503...

  11. 77 FR 24757 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of Labor (DOL))-Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-25

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0083] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Department of Labor (DOL))--Match Number 1015 AGENCY: Social Security... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching...

  12. 75 FR 62623 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-12

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS))--Match Number 1016 AGENCY: Social Security... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching...

  13. 75 FR 59780 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-28

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0040] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Railroad Retirement Board (RRB))--Match Number 1006 AGENCY: Social Security...: A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the...

  14. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D; Shende, Sameer

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and the Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  15. 76 FR 21373 - Privacy Act of 1974; Report of a New System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-15

    ... Information Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986; the Health Insurance... 1974; the Federal Information Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986... established by State law; (3) support litigation involving the Agency; (4) combat fraud, waste, and abuse in...

  16. 76 FR 50198 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program AGENCY: Office of the Inspector General, U.S. Department of Education. ACTION: Notice of computer matching between the U.S... conduct of computer matching programs, notice is hereby given of the establishment of a computer matching...

  17. Senescent sperm performance in old male birds.

    PubMed

    Møller, A P; Mousseau, T A; Rudolfsen, G; Balbontín, J; Marzal, A; Hermosell, I; De Lope, F

    2009-02-01

    Senescence is the deterioration of the phenotype with age, caused by the negative effects of mutations acting late in life or by the physiological deterioration of vital processes. Birds have traditionally been assumed to senesce slowly despite their high metabolic rates, high blood sugar levels and high body temperature. Here we investigate the patterns of age-related sperm performance in a long-distance migrant, the barn swallow Hirundo rustica, in males varying in age from 1 to 6 years, analysed with computer-assisted sperm analysis equipment. Sperm showed deteriorating performance with age in terms of linear movement, track velocity and straight-line velocity, and reduced proportions of rapidly moving, progressive and motile sperm. In a second series of experiments, we assessed the performance of sperm from the same males in a neutral medium and in medium derived from the reproductive tract of females, to test whether sperm of old males performed relatively better in female medium, as expected given that extra-pair paternity is negatively related to male age but not to female age. Older males showed consistently better performance in female medium than in neutral medium in terms of track velocity, straight-line velocity and proportions of rapidly moving, progressive and motile sperm, whereas young males showed better performance in neutral medium. These results provide evidence not only of declining sperm performance with age for important reproductive variables, but also of the sperm of old males performing differentially better in female medium than that of young males.

  18. 15 CFR Appendix B to Part 30 - AES Filing Codes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... exemptions: Currency Airline tickets Bank notes Internal revenue stamps State liquor stamps Advertising...—Trans-Alaska Pipeline Authorization Act C50ENC—Encryption Commodities and Software C51AGR—License Exception Agricultural Commodities C53APP—Adjusted Peak Performance (Computers) C54SS-WRC—Western Red Cedar...

  19. 15 CFR Appendix B to Part 30 - AES Filing Codes

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... exemptions: Currency Airline tickets Bank notes Internal revenue stamps State liquor stamps Advertising...—Trans-Alaska Pipeline Authorization Act C50ENC—Encryption Commodities and Software C51AGR—License Exception Agricultural Commodities C53APP—Adjusted Peak Performance (Computers) C54SS-WRC—Western Red Cedar...

  20. Meriden Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    MacCabe, Bruce

    The Literacy Learning Center Project, a project of the Meriden Public Library (Connecticut), targeted the educationally underserved and functionally illiterate, and involved recruitment, retention, space renovation, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted services, and…

  1. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    .... Description of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312...). ACTION: Notice of Computer Matching Program (CMP). SUMMARY: In accordance with the requirements of the...

  2. 75 FR 9012 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/U.S. Department of Health and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2009-0052] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ U.S. Department of Health and Human Services (HHS), Administration for...

  3. Damage-mitigating control of aircraft for high performance and life extension

    NASA Astrophysics Data System (ADS)

    Caplin, Jeffrey

    1998-12-01

    A methodology is proposed for the synthesis of a Damage-Mitigating Control System for a high-performance fighter aircraft. The design of such a controller involves consideration of damage to critical points of the structure, as well as the performance requirements of the aircraft. This research is interdisciplinary, and brings existing knowledge in the fields of unsteady aerodynamics, structural dynamics, fracture mechanics, and control theory together to formulate a new approach towards aircraft flight controller design. A flexible wing model is formulated using the Finite Element Method, and the important mode shapes and natural frequencies are identified. The Doublet Lattice Method is employed to develop an unsteady flow model for computation of the unsteady aerodynamic loads acting on the wing due to rigid-body maneuvers and structural deformation. These two models are subsequently incorporated into a pre-existing nonlinear rigid-body aircraft flight-dynamic model. A family of robust Damage-Mitigating Controllers is designed using H∞ optimization and μ-synthesis methods. In addition to weighting the error between the ideal performance and the actual performance of the aircraft, weights are also placed on the strain amplitude at the root of each wing. The results show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.
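
    For readers unfamiliar with the synthesis machinery named above, the sketch below shows the general shape of a mixed-weight H∞ design in Python with the python-control package (hinfsyn requires the optional slycot backend). The two-state plant and the weights are toy stand-ins, not the aeroelastic aircraft model from this thesis:

        import control

        # Toy short-period pitch dynamics (illustrative only; the study's model
        # couples rigid-body flight dynamics with flexible wing modes).
        G = control.ss([[-1.0, 1.0], [-4.0, -2.0]], [[0.0], [3.0]],
                       [[1.0, 0.0]], [[0.0]])

        w_perf = control.tf([0.5, 5.0], [1.0, 0.05])     # tracking-error weight, large at low frequency
        w_strain = control.tf([1.0, 0.0], [1.0, 100.0])  # stand-in for the wing-root strain penalty

        P = control.augw(G, w1=w_perf, w2=w_strain)      # augmented plant with weighted outputs
        K, CL, gam, _ = control.hinfsyn(P, 1, 1)         # 1 measurement, 1 control input
        print("achieved H-infinity norm bound:", gam)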

  4. 77 FR 38610 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program AGENCY: Department of Education. ACTION: Notice--Computer matching agreement between the Department of Education and the Department of Defense. SUMMARY: This document provides notice of the continuation of the computer matching...

  5. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or assembled into low-power computing clusters.

  6. High performance systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  7. Students Upgrading through Computer and Career Education System Services (Project SUCCESS). Final Evaluation Report 1993-94. OER Report.

    ERIC Educational Resources Information Center

    Greene, Judy

    Students Upgrading through Computer and Career Education System Services (Project SUCCESS) was an Elementary and Secondary Education Act Title VII-funded project in its fourth year of operation. The project operated at two high schools in Brooklyn and one in Manhattan (New York). In the 1993-94 school year, the project served 393 students of…

  8. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing them when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  9. A Machine Learning Framework to Forecast Wave Conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; James, S. C.; O'Donncha, F.

    2017-12-01

    Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1, 2013 and May 31, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation, as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing," where a device could forecast its own 48-hour energy production.
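
    The abstract's claim that the trained surrogate reduces to "simple matrix multiplications" is easy to make concrete with a linear least-squares sketch. Everything below is illustrative: the feature dimension, noise level, and ridge penalty are assumptions, and only the 3,104-point field size echoes the abstract:

        import numpy as np

        rng = np.random.default_rng(1)
        n_runs, n_inputs, n_points = 2000, 64, 3104   # abstract: 11,078 SWAN runs over 3,104 points

        X = rng.normal(size=(n_runs, n_inputs))        # boundary waves, currents, winds per forecast
        H = X @ rng.normal(size=(n_inputs, n_points))  # stand-in for SWAN wave-height fields
        H += 0.05 * rng.normal(size=H.shape)

        lam = 1.0                                      # ridge penalty
        M = np.linalg.solve(X.T @ X + lam * np.eye(n_inputs), X.T @ H)

        H_pred = X @ M                                 # the surrogate: one matrix multiplication
        print("RMSE:", np.sqrt(np.mean((H_pred - H) ** 2)))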

  10. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  11. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout the DOE, military, and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
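
    As a concrete picture of the kind of "mainly parallel" kernel that gets mapped to a GPU in this approach, here is a toy particle-advance kernel written with Numba's CUDA support (my own illustration, requiring a CUDA-capable GPU; the project's actual Fermi-era kernels are not reproduced in this report):

        import numpy as np
        from numba import cuda

        @cuda.jit
        def advance(pos, vel, acc, dt):
            i = cuda.grid(1)              # one thread per particle
            if i < pos.shape[0]:
                vel[i] += acc[i] * dt     # no inter-particle dependencies,
                pos[i] += vel[i] * dt     # so the loop parallelizes trivially

        n = 1 << 20
        pos = cuda.to_device(np.zeros(n, dtype=np.float32))
        vel = cuda.to_device(np.ones(n, dtype=np.float32))
        acc = cuda.to_device(np.full(n, -9.8, dtype=np.float32))
        advance[(n + 255) // 256, 256](pos, vel, acc, np.float32(1e-3))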

  12. David Sickinger | NREL

    Science.gov Websites

    David Sickinger, Researcher III, High Performance Computing. David.Sickinger@nrel.gov | 303-275-3724. David Sickinger works with NREL's High Performance Computing Systems & Operations group.

  13. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert; Ang, James; Bergman, Keren

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  14. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  15. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  16. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605
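
    The task life cycle the abstract describes (post a task over HTTP, a volunteer browser executes it, the submitter polls for the result) looks roughly like the client-side sketch below. The host, endpoint paths, and JSON fields are hypothetical placeholders, not QMachine's documented API:

        import time
        import requests  # third-party: pip install requests

        BASE = "https://qm.example.org/api"  # hypothetical host, not the real QM service

        # Submit a task: code plus input, to be run by some volunteer browser.
        task = {"code": "function (genome) { return genome.length; }",
                "input": "ACGTACGT"}
        task_id = requests.post(f"{BASE}/tasks", json=task, timeout=10).json()["id"]

        # Poll until a volunteer has executed the task and posted a result.
        while True:
            r = requests.get(f"{BASE}/tasks/{task_id}", timeout=10).json()
            if r.get("status") == "done":
                print("result:", r["result"])
                break
            time.sleep(2)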

  17. A Multidimensional Perspective of College Readiness: Relating Student and School Characteristics to Performance on the ACT®. ACT Research Report Series 2015 (6)

    ERIC Educational Resources Information Center

    McNeish, Daniel M.; Radunzel, Justine; Sanchez, Edgar

    2015-01-01

    This study examined the contributions of students' noncognitive characteristics toward explaining performance on the ACT® test, over and above traditional predictors such as high school grade point average (HSGPA), coursework taken, and school characteristics. The sample consisted of 6,440 high school seniors from 4,541 schools who took the ACT in…

  18. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  19. A Boundary Delineation System for the Bureau of Ocean Energy Management

    NASA Astrophysics Data System (ADS)

    Vandegraft, Douglas L.

    2018-05-01

    Federal government mapping of the offshore areas of the United States in support of the development of oil and gas resources began in 1954. The first mapping system utilized a network of rectangular blocks defined by State Plane coordinates, which was later revised to utilize the Universal Transverse Mercator grid. Offshore boundaries directed by the Submerged Lands Act and the Outer Continental Shelf Lands Act were mathematically determined using early computer programs that performed the required computations but required many steps. The Bureau of Ocean Energy Management has replaced these antiquated methods with GIS technology, which provides the required accuracy and produces the mapping products needed for leasing of energy resources, including renewable energy projects, on the outer continental shelf. (Note: this is an updated version of a paper of the same title written and published in 2015.)

  20. Studies on the ionospheric-thermospheric coupling mechanisms using SLR

    NASA Astrophysics Data System (ADS)

    Panzetta, Francesca; Erdogan, Eren; Bloßfeld, Mathis; Schmidt, Michael

    2016-04-01

    Several Low Earth Orbiters (LEOs) have been used by different research groups to model the thermospheric neutral density distribution at various altitudes by performing Precise Orbit Determination (POD) in combination with satellite accelerometry. This approach is, in principle, based on satellite drag analysis, driven by the fact that the drag force is one of the major perturbing forces acting on LEOs. The satellite drag itself is physically related to the thermospheric density. The present contribution investigates the possibility of computing the thermospheric density from Satellite Laser Ranging (SLR) observations. SLR is commonly used to compute very accurate satellite orbits. As a prerequisite, very precise modelling of gravitational and non-gravitational accelerations is necessary. For this investigation, a sensitivity study of SLR observations to thermospheric density variations is performed using the DGFI Orbit and Geodetic parameter estimation Software (DOGS). SLR data from satellites at altitudes lower than 500 km are processed adopting different thermospheric models. The drag coefficients, which describe the interaction of the satellite surfaces with the atmosphere, are analytically computed in order to obtain scaling factors purely related to the thermospheric density. The results are reported and discussed in terms of estimates of scaling coefficients of the thermospheric density. In addition, further extensions and improvements in thermospheric density modelling obtained by combining a physics-based approach with ionospheric observations are investigated. For this purpose, the coupling mechanisms between the thermosphere and ionosphere are studied.

  1. Effect of computer game playing on baseline laparoscopic simulator skills.

    PubMed

    Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd

    2013-08-01

    Studies examining the possible association between computer game playing and laparoscopic performance have yielded conflicting results, and a relationship between computer game playing and baseline performance on laparoscopic simulators has not been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years ago. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Performance on the simulator was then analyzed for association with computer game experience. Local high school, Norway. Forty-eight students from 2 high school classes volunteered to participate in the study. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.

  2. Automated High-Throughput Quantification of Mitotic Spindle Positioning from DIC Movies of Caenorhabditis Embryos

    PubMed Central

    Cluet, David; Spichty, Martin; Delattre, Marie

    2014-01-01

    The mitotic spindle is a microtubule-based structure that elongates to accurately segregate chromosomes during anaphase. Its position within the cell also dictates the future cell cleavage plan, thereby determining daughter cell orientation within a tissue or cell fate adoption for polarized cells. Therefore, the mitotic spindle ensures at the same time proper cell division and developmental precision. Consequently, spindle dynamics is the matter of intensive research. Among the different cellular models that have been explored, the one-cell stage C. elegans embryo has been an essential and powerful system to dissect the molecular and biophysical basis of spindle elongation and positioning. Indeed, in this large and transparent cell, spindle poles (or centrosomes) can be easily detected from simple DIC microscopy by human eyes. To perform quantitative and high-throughput analysis of spindle motion, we developed a computer program ACT for Automated-Centrosome-Tracking from DIC movies of C. elegans embryos. We therefore offer an alternative to the image acquisition and processing of transgenic lines expressing fluorescent spindle markers. Consequently, experiments on large sets of cells can be performed with a simple setup using inexpensive microscopes. Moreover, analysis of any mutant or wild-type backgrounds is accessible because laborious rounds of crosses with transgenic lines become unnecessary. Last, our program allows spindle detection in other nematode species, offering the same quality of DIC images but for which techniques of transgenesis are not accessible. Thus, our program also opens the way towards a quantitative evolutionary approach of spindle dynamics. Overall, our computer program is a unique macro for the image- and movie-processing platform ImageJ. It is user-friendly and freely available under an open-source licence. ACT allows batch-wise analysis of large sets of mitosis events. Within 2 minutes, a single movie is processed and the accuracy of the automated tracking matches the precision of the human eye. PMID:24763198

  3. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
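
    The co-scheduling idea, assigning each task to whichever device is expected to finish it soonest instead of statically to one device class, can be sketched as a greedy earliest-finish-time loop. The task names, costs, and speedups below are invented for illustration, not the paper's measurements:

        # Greedy performance-aware scheduling sketch: (name, cpu_sec, gpu_speedup).
        tasks = [("color-hist", 4.0, 1.5), ("texture", 9.0, 7.0),
                 ("segmentation", 12.0, 5.5), ("morphology", 3.0, 0.8)]

        devices = {"cpu0": 0.0, "cpu1": 0.0, "gpu0": 0.0}  # device -> time it becomes free

        def run_cost(dev, cpu_cost, gpu_speedup):
            # speedup < 1 means the task runs better on a CPU; the
            # earliest-finish rule discovers that automatically.
            return cpu_cost / gpu_speedup if dev.startswith("gpu") else cpu_cost

        for name, cpu_cost, gpu_speedup in sorted(tasks, key=lambda t: -t[1]):
            dev = min(devices, key=lambda d: devices[d] + run_cost(d, cpu_cost, gpu_speedup))
            start = devices[dev]
            devices[dev] = start + run_cost(dev, cpu_cost, gpu_speedup)
            print(f"{name:12s} -> {dev}  [{start:5.1f}, {devices[dev]:5.1f}]")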

  4. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE PAGES

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...

    2017-10-25

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  5. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  6. Petascale supercomputing to accelerate the design of high-temperature alloys

    NASA Astrophysics Data System (ADS)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
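
    The last step mentioned in all three versions of this abstract, predicting segregation energies from materials descriptors without new first-principles runs, is a small supervised-regression problem. A minimal scikit-learn sketch on synthetic data (the real descriptors and energies come from the petascale DFT workflow, not from this toy):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        # Rows: the 34 solutes; columns: stand-in descriptors (atomic radius,
        # electronegativity, cohesive energy, ...). Values are synthetic.
        X = rng.normal(size=(34, 6))
        y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=34)  # surrogate segregation energies (eV)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
        print(f"cross-validated MAE: {-scores.mean():.3f} eV")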

  7. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 3 2013-07-01 2013-07-01 false Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  8. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 3 2012-07-01 2009-07-01 true Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  9. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 3 2014-07-01 2014-07-01 false Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  10. 78 FR 50146 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice of Computer Match Program. SUMMARY: Pursuant to 5 U.S.C. 552a... to conduct a computer matching program with the Internal Revenue Service (IRS). Data from the...

  11. 76 FR 47299 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice of Computer Match Program. SUMMARY: Pursuant to 5 U.S.C. 552a... to conduct a computer matching program with the Internal Revenue Service (IRS). Data from the...

  12. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 3 2011-07-01 2009-07-01 true Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  13. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  14. Wayne Township Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Smyth, Carol B.; Grannell, Dorothy S.; Moore, Miriam

    The Literacy Resource Center project, a program of the Wayne Township Public Library also known as the Morrisson-Reeves Library (Richmond, Indiana), involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer-assisted, other technology, employment oriented,…

  15. Hopkinsville-Christian County Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Nevels, Vada Germaine

    The Hopkinsville-Christian County Library (Kentucky) conducted a project that involved recruitment, public awareness, basic literacy, collection development, tutoring, computer-assisted, other technology, and intergenerational/family programs. The project served a community of 50,000-100,000 people, and targeted the learning disabled,…

  16. Columbia County Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Cole, Lucy; Fraser, Ruth

    The Columbia County Public Library (Lake City, Florida) conducted a project that involved recruitment, retention, public awareness, training, basic literacy, collection development, tutoring, computer- assisted, other technology, intergenerational/family, and English as a Second Language (ESL) programs. The project served a community of…

  17. The Role of Agent Age and Gender for Middle-Grade Girls

    ERIC Educational Resources Information Center

    Kim, Yanghee

    2016-01-01

    Compared to boys, many girls are more aware of a social context in the learning process and perform better when the environment supports frequent interactions and social relationships. For these girls, embodied agents (animated on-screen characters acting as tutors) could afford simulated social interactions in computer-based learning and thereby…

  18. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James Bradley; Betty, Rita

    Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact to the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.

  19. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    NASA Astrophysics Data System (ADS)

    Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil

    2018-03-01

    This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  20. Research | Computational Science | NREL

    Science.gov Websites

    Research. NREL's computational science experts use advanced high-performance computing (HPC) technologies, thereby accelerating the transformation of our nation's energy system. Enabling High-Impact Research: NREL's computational science capabilities enable high-impact research. Some recent examples…

  1. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer-to-Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.
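
    The paradigm at the center of the project has a compact generic skeleton: test for a base case, otherwise divide, recurse, and combine. In the dual-service architecture it is the recursive calls that would be farmed out to contributor nodes; the sketch below is purely local and sequential:

        import heapq
        from typing import Callable, List, TypeVar

        T, R = TypeVar("T"), TypeVar("R")

        def divide_and_conquer(problem: T,
                               is_base: Callable[[T], bool],
                               solve_base: Callable[[T], R],
                               divide: Callable[[T], List[T]],
                               combine: Callable[[List[R]], R]) -> R:
            """Generic driver; each recursive call is an independent work unit."""
            if is_base(problem):
                return solve_base(problem)
            return combine([divide_and_conquer(p, is_base, solve_base, divide, combine)
                            for p in divide(problem)])

        # Instantiation: merge sort.
        print(divide_and_conquer(
            [5, 2, 9, 1, 7, 3],
            is_base=lambda xs: len(xs) <= 1,
            solve_base=lambda xs: xs,
            divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
            combine=lambda parts: list(heapq.merge(*parts))))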

  2. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  3. The Effect of Potassium Impurities Deliberately Introduced into Activated Carbon Cathodes on the Performance of Lithium-Oxygen Batteries

    DOE PAGES

    Zhai, Dengyun; Lau, Kah Chun; Wang, Hsien-Hau; ...

    2015-12-02

    Rechargeable lithium-air (Li-O2) batteries have drawn much interest owing to their high energy density. We report on the effect of deliberately introducing potassium impurities into the cathode material on the electrochemical performance of a Li-O2 battery. Small amounts of potassium introduced into the activated carbon (AC) cathode material in the synthesis process are found to have a dramatic effect on the performance of the Li-O2 cell. An increased amount of potassium significantly increases capacity, cycle life, and round-trip efficiency. This improved performance is probably due to a larger amount of LiO2 in the discharge product, which is a mixture of LiO2 and Li2O2, resulting from the increase in the amount of potassium present. No substantial correlation with porosity or surface area in an AC cathode is found. Lastly, experimental and computational studies indicate that potassium can act as an oxygen reduction catalyst, which can account for the dependence of performance on the amount of potassium.

  4. Dynamic Target Match Signals in Perirhinal Cortex Can Be Explained by Instantaneous Computations That Act on Dynamic Input from Inferotemporal Cortex

    PubMed Central

    Pagan, Marino

    2014-01-01

    Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed onto PRH, where they were reformatted into a more explicit target match signal over ∼10–15 ms. Accounting for these dynamics in PRH did not require proposing dynamic computations within PRH itself but, rather, could be attributed to instantaneous PRH computations performed upon an input representation from IT that changed with time. We found that the dynamics of the IT representation arose from two commonly observed features: individual IT neurons whose response preferences were not simply rescaled with time and variable response latencies across the population. Our results demonstrate that these types of time-varying responses have important consequences for downstream computation and suggest that dynamic representations can arise within a feedforward framework as a consequence of instantaneous computations performed upon time-varying inputs. PMID:25122904

  5. Secure Multi-party Computation Protocol for Defense Applications in Military Operations Using Virtual Cryptography

    NASA Astrophysics Data System (ADS)

    Pathak, Rohit; Joshi, Satyadhar

    Since the start of the 21st century, the whole world has faced the common dilemma of terrorism. The suicide attacks on the US twin towers on 11 Sept. 2001, the train bombings in Madrid, Spain, on 11 Mar. 2004, the London bombings of 7 Jul. 2005, and the Mumbai attack of 26 Nov. 2008 were some of the most disturbing, destructive and evil acts by terrorists in the last decade, which clearly showed their intent to go to any extent to accomplish their goals. Many terrorist organizations such as al Qaida, Harakat ul-Mujahidin, Hezbollah, Jaish-e-Mohammed, Lashkar-e-Toiba, etc., are carrying out training camps and terrorist operations accompanied by the latest technology and high-tech arsenal. To counter such terrorism our military is in need of advanced defense technology. One of the major issues of concern is secure communication. It has to be made sure that communication between different military forces is secure so that critical information is not leaked to the adversary. Military forces need secure communication to shield their confidential data from terrorist forces. Leakage of concerned data can prove hazardous, thus preservation and security is of prime importance. There may be a need to perform computations that require data from many military forces, but in some cases the associated forces would not want to reveal their data to other forces. In such situations Secure Multi-party Computations find their application. In this paper, we propose a new highly scalable Secure Multi-party Computation (SMC) protocol and algorithm for defense applications which can be used to perform computation on encrypted data. Every party encrypts their data in accordance with a particular scheme. This encrypted data is distributed among some created virtual parties. These virtual parties send their data to the TTP through an anonymizer layer. The TTP performs computation on the encrypted data and announces the result. As the data sent was encrypted, its actual value can’t be known by the TTP, and with the use of anonymizers we have covered the identity of the true source of the data. Modifier tokens are generated during encryption of the data, distributed among the virtual parties, then sent to the TTP and finally used in the computation. Thus, without revealing the data, the right result can be computed and the privacy of the parties is maintained. We have also given a probabilistic security analysis of hacking the protocol and shown how zero hacking security can be achieved.
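
    The core trick, splitting each party's value into shares held by virtual parties so that the third party can compute on data it never sees in the clear, is easiest to demonstrate with plain additive secret sharing. The sketch below is a simplified stand-in for the paper's protocol (it omits the encryption scheme, modifier tokens, and anonymizer layer):

        import secrets

        Q = 2**61 - 1  # all arithmetic modulo a large prime

        def share(value, n_virtual):
            """Split value into n additive shares summing to value mod Q;
            any proper subset of shares is uniformly random."""
            shares = [secrets.randbelow(Q) for _ in range(n_virtual - 1)]
            shares.append((value - sum(shares)) % Q)
            return shares

        # Three forces, each splitting its secret across 4 virtual parties.
        inputs = [1200, 3400, 560]
        all_shares = [s for v in inputs for s in share(v, 4)]

        # The TTP sees only anonymized shares, yet the sum is exact.
        print(sum(all_shares) % Q)  # 5160 == 1200 + 3400 + 560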

  6. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    Allocations: System Resource Allocations. To use NREL's high-performance computing (HPC) resources, users request allocations of compute hours on NREL HPC systems, including Peregrine and Eagle, and storage space (in terabytes) on Peregrine, Eagle, and Gyrfalcon. Allocations are principally done in response to an annual call for allocation…

  7. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  8. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  9. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
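
    A rough feel for the patented technique, grouping a large number of threads by the address at which they are currently executing so that stuck or divergent threads stand out, can be had in pure Python. This is an analogy at miniature scale (and CPython-specific via sys._current_frames), not the patent's implementation:

        import sys
        import threading
        import time
        from collections import defaultdict

        def worker(stall):
            if stall:
                time.sleep(60)          # the "defective" thread, stuck here
            else:
                for _ in range(10**8):  # healthy threads busy in this loop
                    pass

        threads = [threading.Thread(target=worker, args=(i == 3,), daemon=True)
                   for i in range(8)]
        for t in threads:
            t.start()
        time.sleep(0.5)

        # Group threads by the file:line of their current instruction.
        groups = defaultdict(list)
        for tid, frame in sys._current_frames().items():
            groups[(frame.f_code.co_filename, frame.f_lineno)].append(tid)
        for (fname, lineno), tids in groups.items():
            print(f"{len(tids):2d} thread(s) at {fname}:{lineno}")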

  10. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  11. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing (HPC) centers traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC center can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network, and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC center and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, the researcher count has increased from 6 to 58.

  12. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

    Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues, including stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of the situations that can lead to human errors, which include taking the wrong action, taking no action, or making bad decisions, provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  13. From High School to the Future: ACT Preparation--Too Much, Too Late. Why ACT Scores Are Low in Chicago and What It Means for Schools

    ERIC Educational Resources Information Center

    Allensworth, Elaine; Correa, Macarena; Ponisciak, Steve

    2008-01-01

    The majority of Chicago Public Schools (CPS) students are not attaining the ACT scores they are aiming for, which they need to qualify for scholarships and college acceptance. This report looks at the reasons behind students' low performance and what matters for doing well on this test. CPS students are highly motivated to do well on the ACT, and…

  14. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of the GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of the GPU with other platforms will also be presented. PMID:24486639
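    To make the programming model concrete, here is a minimal data-parallel kernel in Python using numba's CUDA support (requires the numba package and a CUDA-capable GPU). It is a generic element-wise example of the grid/block/thread style such reviews introduce, not a method from the article:

      import numpy as np
      from numba import cuda

      @cuda.jit
      def scale_dose(dose, factor):
          i = cuda.grid(1)            # global thread index across the launch grid
          if i < dose.size:           # guard threads beyond the array bounds
              dose[i] *= factor

      dose = np.random.rand(1_000_000).astype(np.float32)
      d_dose = cuda.to_device(dose)                     # host-to-device transfer
      threads = 256
      blocks = (dose.size + threads - 1) // threads     # cover the whole array
      scale_dose[blocks, threads](d_dose, np.float32(1.1))
      dose = d_dose.copy_to_host()                      # device-to-host transfer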

  15. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, which resemble mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration given the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
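    In the spirit of the TLPOM's search for the best block/thread configuration under per-node resource constraints, the sketch below scores launch configurations with a simple occupancy heuristic. The limits are illustrative round numbers, not the paper's actual model:

      def best_config(n_items, regs_per_thread, max_regs=65536,
                      max_threads=1024, warp=32):
          """Pick (blocks, threads) maximizing a crude occupancy score."""
          best = None
          for threads in range(warp, max_threads + 1, warp):
              if threads * regs_per_thread > max_regs:
                  break                                  # exceeds the register file
              blocks = (n_items + threads - 1) // threads
              occupancy = threads / max_threads          # crude occupancy proxy
              if best is None or occupancy > best[2]:
                  best = (blocks, threads, occupancy)
          return best

      print(best_config(n_items=1 << 20, regs_per_thread=40))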

  16. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, which resemble mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration given the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  17. 78 FR 29748 - Privacy Act of 1974; Computer Matching Program Between the Department of Education and the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program Between the Department of... document provides notice of the continuation of a computer matching program between the Department of... 5301, the Department of Justice and the Department of Education implemented a computer matching program...

  18. 75 FR 53005 - Privacy Act of 1974, as amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as amended; Notice of Computer Matching Program...

  19. 78 FR 34678 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  20. 77 FR 74020 - Office of Child Support Enforcement; Privacy Act of 1974; Computer Matching Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... 29, 2012, sent a report of a Computer Matching Program to the Committee on Homeland Security and... Support Enforcement; Privacy Act of 1974; Computer Matching Agreement AGENCY: Office of Child Support Enforcement (OCSE), ACF, HHS. ACTION: Notice of a Computer Matching Program. SUMMARY: In accordance with the...

  1. 13 CFR 102.40 - Computer matching.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 102.40 Computer...) Matching agreements. SBA will comply with the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Computer matching. 102.40 Section...

  2. 13 CFR 102.40 - Computer matching.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 102.40 Computer...) Matching agreements. SBA will comply with the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Computer matching. 102.40 Section...

  3. 13 CFR 102.40 - Computer matching.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 102.40 Computer...) Matching agreements. SBA will comply with the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Computer matching. 102.40 Section...

  4. 13 CFR 102.40 - Computer matching.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 102.40 Computer...) Matching agreements. SBA will comply with the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Computer matching. 102.40 Section...

  5. 13 CFR 102.40 - Computer matching.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 102.40 Computer...) Matching agreements. SBA will comply with the Computer Matching and Privacy Protection Act of 1988 (5 U.S.C... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Computer matching. 102.40 Section...

  6. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

We describe the development of a scientific cloud computing (SCC) platform that offers high-performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
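    The kind of call such a toolset automates can be sketched with boto3, the standard Python SDK for AWS (placeholder AMI ID and key-pair name; this is a generic sketch, not the authors' SCC toolset):

      import boto3

      ec2 = boto3.resource("ec2", region_name="us-east-1")
      nodes = ec2.create_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder machine image
          InstanceType="c5.xlarge",
          MinCount=4, MaxCount=4,            # a four-node virtual cluster
          KeyName="my-keypair",              # placeholder SSH key pair
      )
      for node in nodes:
          node.wait_until_running()
          node.reload()                      # refresh attributes after startup
          print(node.id, node.private_ip_address)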

  7. DoD High Performance Computing Modernization Program Users Group Conference (HPCMP UGC 2011) Held in Portland, Oregon on June 20-23, 2011

    DTIC Science & Technology

    2011-06-01

4. Conclusion The Web-based AGeS system described in this paper is a computationally-efficient and scalable system for high-throughput genome...method for protecting web services involves making them more resilient to attack using autonomic computing techniques. This paper presents our initial...20–23, 2011 2011 DoD High Performance Computing Modernization Program Users Group Conference HPCMP UGC 2011 The papers in this book comprise the

  8. Space Communications Artificial Intelligence for Link Evaluation Terminal (SCAILET)

    NASA Technical Reports Server (NTRS)

    Shahidi, Anoosh

    1991-01-01

A software application to assist end-users of the Link Evaluation Terminal (LET) for satellite communication is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving, 220/110 Mbps capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET and ACTS are being developed at the NASA Lewis Research Center. The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit pattern as a modulated signal to the satellite. By comparing the transmitted bit pattern with the received bit pattern, HBR LET can determine the bit error rate (BER) under various atmospheric conditions. An algorithm for power augmentation is applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions. Programming scripts, defined by the design engineer, set up the HBR LET terminal by programming subsystem devices through IEEE-488 interfaces. However, the scripts are difficult to use, require a steep learning curve, are cryptic, and are hard to maintain. The combination of the learning curve and the complexities involved in editing the script files may discourage end-users from utilizing the full capabilities of the HBR LET system. An intelligent assistant component of SCAILET that addresses critical end-user needs in the programming of the HBR LET system, as anticipated by its developers, is described. A close look is taken at the various steps involved in writing ECM software for a C&P computer and at how the intelligent assistant improves the HBR LET system and enhances the end-user's ability to perform the experiments.

  9. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  10. High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation

    DTIC Science & Technology

    2016-11-01

High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation, by Kathryn Esham, Luis Bravo, Anindya Ghoshal, Muthuvel Murugan, and Michael...

  11. 40 CFR 124.20 - Computation of time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... scheduled to begin on the occurrence of an act or event shall begin on the day after the act or event. (b) Any time period scheduled to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event. (c) If the final day of any time period falls on a...
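    The quoted rules translate directly into code. The sketch below implements paragraphs (a) and (b) with Python's datetime; the record truncates paragraph (c), so the weekend extension shown is an assumption based on the usual CFR practice (legal holidays omitted):

      from datetime import date, timedelta

      def period_after_event(event: date, days: int):
          # (a) A period triggered by an act or event begins the day after it.
          start = event + timedelta(days=1)
          end = start + timedelta(days=days - 1)
          # (c) Assumed: if the final day falls on a weekend, extend to Monday.
          while end.weekday() >= 5:          # 5 = Saturday, 6 = Sunday
              end += timedelta(days=1)
          return start, end

      def period_before_event(event: date, days: int):
          # (b) A period that runs up to an act or event ends the day before it.
          end = event - timedelta(days=1)
          return end - timedelta(days=days - 1), end

      print(period_after_event(date(2012, 7, 2), 10))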

  12. 40 CFR 124.20 - Computation of time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... scheduled to begin on the occurrence of an act or event shall begin on the day after the act or event. (b) Any time period scheduled to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event. (c) If the final day of any time period falls on a...

  13. 40 CFR 124.20 - Computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... scheduled to begin on the occurrence of an act or event shall begin on the day after the act or event. (b) Any time period scheduled to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event. (c) If the final day of any time period falls on a...

  14. 40 CFR 124.20 - Computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... scheduled to begin on the occurrence of an act or event shall begin on the day after the act or event. (b) Any time period scheduled to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event. (c) If the final day of any time period falls on a...

  15. 40 CFR 124.20 - Computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... scheduled to begin on the occurrence of an act or event shall begin on the day after the act or event. (b) Any time period scheduled to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event. (c) If the final day of any time period falls on a...

  16. A Research and Development Strategy for High Performance Computing.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  17. DCL System Using Deep Learning Approaches for Land-Based or Ship-Based Real Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2015-09-30

Clark (2014), "Using High Performance Computing to Explore Large Complex Bioacoustic Soundscapes: Case Study for Right Whale Acoustics," Procedia Computer Science 20.

  18. Information transmission over an amplitude damping channel with an arbitrary degree of memory

    NASA Astrophysics Data System (ADS)

    D'Arrigo, Antonio; Benenti, Giuliano; Falci, Giuseppe; Macchiavello, Chiara

    2015-12-01

    We study the performance of a partially correlated amplitude damping channel acting on two qubits. We derive lower bounds for the single-shot classical capacity by studying two kinds of quantum ensembles, one which allows us to maximize the Holevo quantity for the memoryless channel and the other allowing the same task but for the full-memory channel. In these two cases we also show the amount of entanglement which is involved in achieving the maximum of the Holevo quantity. For the single-shot quantum capacity we discuss both a lower and an upper bound, achieving a good estimate for high values of the channel transmissivity. We finally compute the entanglement-assisted classical channel capacity.
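    For reference, the Holevo quantity maximized in these bounds is the standard textbook one (not specific to this paper):

      \[ \chi = S\Big(\sum_i p_i \rho_i\Big) - \sum_i p_i\, S(\rho_i), \]

    where \(S(\rho) = -\mathrm{Tr}(\rho \log \rho)\) is the von Neumann entropy and \(\{p_i, \rho_i\}\) is the input ensemble.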

  19. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) acts as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
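    The EFT criterion used for the initial assignment can be illustrated with a plain list-scheduling baseline (a simplified HEFT-style heuristic with communication costs ignored; this is not the paper's CQPSO algorithm):

      def eft_schedule(tasks, deps, cost, n_proc):
          """tasks: priority order; deps: task -> predecessors;
          cost[task][p]: run time of task on processor p."""
          proc_free = [0.0] * n_proc
          finish, assign = {}, {}
          for t in tasks:
              ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
              # Place the task on the processor giving the earliest finish time.
              p = min(range(n_proc),
                      key=lambda q: max(proc_free[q], ready) + cost[t][q])
              start = max(proc_free[p], ready)
              finish[t] = start + cost[t][p]
              proc_free[p] = finish[t]
              assign[t] = p
          return assign, max(finish.values())     # placement and makespan

      cost = {"a": [2, 3], "b": [3, 2], "c": [1, 4]}
      print(eft_schedule(["a", "b", "c"], {"c": {"a", "b"}}, cost, n_proc=2))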

  20. P2P Technology for High-Performance Computing: An Overview

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Berry, Jason

    2003-01-01

The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility of using the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As part of a larger project on P2P high-performance computing, my goal has been to compile an overview of the P2P paradigm. I studied various P2P platforms and compiled systematic brief descriptions of their most important characteristics. I also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I also compiled relevant literature and web references. I prepared a draft technical report and summarized my findings in a poster paper.

  1. Pilot trial of spirometer games for airway clearance practice in cystic fibrosis.

    PubMed

    Bingham, Peter M; Lahiri, Thomas; Ashikaga, Taka

    2012-08-01

Many children with cystic fibrosis (CF) adhere poorly to airway clearance techniques (ACTs), and would rather play video games that challenge their dexterity and visual tracking skills. We developed gaming technology that encourages forced expiratory maneuvers. Following interviews regarding recreational activities and subjects' practice of ACTs, we conducted a pilot trial of spirometer games in 13 adolescents with CF, to test the hypothesis that games could increase subjects' engagement with forced expiratory breathing maneuvers and improve pulmonary function tests (PFTs). After baseline PFTs, subjects were provided with digital spirometers and computers set up as "game only" or "control" devices. After the first of 2 periods (each > 2 weeks), the computer was set up for the alternate condition for period 2. The t test and non-parametric correlation analyses examined use, the number of expiratory high flow events (HFEs), and change in PFTs, identifying trends at P ≤ .1 and significance at P < .05. Interviews disclosed minimal awareness of ACTs among our pediatric CF patients. Subjects used the game and control software on a similar percentage of days during the game (26%) and control (32%) periods. There was a trend toward more minutes with the game versus the control setup (P = .07), though the HFE count did not differ between the 2 conditions (P = .71). Game play showed no overall effect on FEV1, though correlation analysis showed a modest relation between minutes of play and change in FEV1 from baseline (r = 0.50, P = .09). The game period showed a trend toward increased vital capacity (P = .05). Spirometer games elicit forced expiratory breath maneuvers in pediatric CF patients. Improvement in PFTs may be due to improved test performance technique, though improved obstructive/restrictive lung function due to game play cannot be excluded. A formal clinical trial of this approach is planned.

  2. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility "grid", analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  3. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on the Elastic Compute Cloud (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
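    A quick sanity check of the quoted figures (simple arithmetic on the reported values, nothing more):

      cores = 240
      hpl_tflops = 2.0                              # measured HPL throughput
      efficiency = 0.70                             # fraction of theoretical peak
      peak_tflops = hpl_tflops / efficiency         # ~2.86 TFLOPS theoretical
      per_core_gflops = peak_tflops * 1000 / cores  # ~11.9 GFLOPS per core
      print(round(peak_tflops, 2), round(per_core_gflops, 1))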

  4. 78 FR 49525 - Privacy Act of 1974; CMS Computer Match No. 2013-06; HHS Computer Match No. 1308

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... Care Act of 2010 (Pub. L. 111-148), as amended by the Health Care and Education Reconciliation Act of..., 2009). INCLUSIVE DATES OF THE MATCH: The CMP will become effective no sooner than 40 days after the...

  5. 76 FR 12984 - Privacy Act of 1974; Notice of a Computer Matching Program Between HUD and the United States...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... of a Computer Matching Program Between HUD and the United States Department of Veterans Affairs (VA) AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notice of a computer matching program... the Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), and the Office of...

  6. 78 FR 71591 - Privacy Act of 1974; Computer Matching Program between the U.S. Department of Education (ED) and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program between the U.S. Department.... ACTION: Notice. SUMMARY: Notice is hereby given of the renewal of the computer matching program between... (VA) (source agency). After the ED and VA Data Integrity Boards approve a new computer matching...

  7. 78 FR 29786 - Computer Matching and Privacy Protection Act of 1988; Report of Matching Program: RRB and State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... RAILROAD RETIREMENT BOARD Computer Matching and Privacy Protection Act of 1988; Report of Matching...: Notice of a renewal of an existing computer matching program due to expire on May 24, 2013. SUMMARY: As... of its intent to renew an ongoing computer matching program. In this match, we provide certain...

  8. 76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  9. 76 FR 77811 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  10. National Test Bed Security and Communications Architecture Working Group Report

    DTIC Science & Technology

    1992-04-01

    computer systems via a physical medium. Most of those physical media are tappable or interceptable. This means that all the data that flows across the...provides the capability for NTBN nodes to support users operating in differing COIs to share the computing resources and communication media and for...representation. Again generally speaking, the NTBN must act as the high-speed, wide-bandwidth communications media that would provide the "near real-time

  11. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  12. Running Jobs on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

Documentation for running jobs on the Peregrine high-performance computing (HPC) system at NREL, covering how to run different types of jobs, batch job submission, scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.

  13. Structure and sequence based functional annotation of Zika virus NS2b protein: Computational insights.

    PubMed

    Aguilera-Pesantes, Daniel; Méndez, Miguel A

    2017-10-28

While Zika virus (ZIKV) outbreaks are a growing concern for global health, a deep understanding of the virus is lacking. Here we report a contribution to the basic science on the virus: a detailed computational analysis of the nonstructural protein NS2b. This protein acts as a cofactor for the NS3 protease (NS3Pro) domain, which is important in the viral life cycle and is an interesting target for drug development. We found that the ZIKV NS2b cofactor is highly similar to those of other viruses within the Flavivirus genus, especially West Nile virus, suggesting that it is essential for protease complex activity. Furthermore, ZIKV NS2b plays an important role in the function and stability of the ZIKV NS3 protease domain even though it presents a low conservation score. In addition, ZIKV NS2b is mostly rigid, which could imply a non-dynamic nature in substrate recognition. Finally, by performing a computational alanine scanning mutagenesis, we found that residues Gly 52 and Asp 83 in NS2b could be important in substrate recognition.

  14. High-performance scientific computing in the cloud

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  15. Marin County Free Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Mooney, Sharon Lopez

    The West Marin Literacy Project, a project of the Marin County Free Library (San Rafael, California), involved recruitment, retention, coalition building, public awareness, training, rural oriented, tutoring, computer- assisted, intergenerational/family, and English as a Second Language (ESL) programs. The project served a community of under…

  16. Martinsburg-Berkeley County Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Hess, Therese M.

    The Martinsburg-Berkeley County Public Library (West Virginia) conducted a project that involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted, other technology, and English as a Second Language (ESL) programs. The project served a three-county community…

  17. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
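    The modular decomposition described above maps naturally onto a generic Gauss-Newton inversion skeleton. The Python sketch below (AP3DMT itself is MATLAB; the names and the toy linear problem are illustrative only) shows how forward modeling, sensitivities, and regularization plug into the update step:

      import numpy as np

      def invert(model, data, forward, jacobian, reg, n_iter=10, beta=1e-6):
          for _ in range(n_iter):
              r = data - forward(model)       # data misfit (data functional)
              J = jacobian(model)             # sensitivity matrix
              # Regularized normal equations: (J^T J + beta*R) dm = J^T r
              dm = np.linalg.solve(J.T @ J + beta * reg, J.T @ r)
              model = model + dm
          return model

      # Toy usage: recover x from d = G x with a linear "forward" operator.
      G = np.array([[2.0, 0.0], [1.0, 3.0]])
      d = G @ np.array([1.0, 2.0])
      print(invert(np.zeros(2), d, lambda m: G @ m, lambda m: G, np.eye(2)))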

  18. A computational model of spatial visualization capacity.

    PubMed

    Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A

    2008-09-01

Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
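    For context, the decay mechanism mentioned above is ACT-R's standard base-level learning equation (textbook form; the paper's new spatial-interference mechanism has no comparably standard formula and is not reproduced here):

      \[ B_i = \ln\Big(\sum_{j=1}^{n} t_j^{-d}\Big), \]

    where \(t_j\) is the time since the \(j\)-th access of chunk \(i\) and \(d\) is the decay parameter (conventionally \(d = 0.5\)).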

  19. Hadoop-MCC: Efficient Multiple Compound Comparison Algorithm Using Hadoop.

    PubMed

    Hua, Guan-Jie; Hung, Che-Lun; Tang, Chuan Yi

    2018-01-01

In the past decade, drug design technologies have improved enormously. Computer-aided drug design (CADD) has played an important role in analysis and prediction in drug development, making the procedure more economical and efficient. However, computation with big data, such as ZINC, containing more than 60 million compounds, and GDB-13, with more than 930 million small molecules, poses a notable time-consumption problem. Therefore, we propose a novel heterogeneous high performance computing method, named Hadoop-MCC, integrating Hadoop and the GPU, to cope with big chemical structure data efficiently. Hadoop-MCC gains high availability and fault tolerance from Hadoop, as Hadoop is used to scatter input data to GPU devices and gather the results from them. The Hadoop framework adopts a mapper/reducer computation model. In the proposed method, mappers are responsible for fetching SMILES data segments and performing the LINGO method on the GPU, and reducers then collect all comparison results produced by the mappers. Due to the high availability of Hadoop, all LINGO computational jobs on the mappers can be completed even if some of the mappers encounter problems. A LINGO comparison is performed on each GPU device in parallel. According to the experimental results, the proposed method on multiple GPU devices achieves better computational performance than CUDA-MCC on a single GPU device. Hadoop-MCC is able to achieve the scalability, high availability, and fault tolerance granted by Hadoop, as well as high performance, by integrating the computational power of both Hadoop and the GPU. It has been shown that using a heterogeneous architecture such as Hadoop-MCC can effectively deliver better computational performance than a single GPU device.
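    The per-segment work the mappers offload to the GPU is a LINGO comparison; a pure-Python CPU analogue is easy to sketch (simplified, not the paper's CUDA implementation). LINGOs are the overlapping 4-character substrings of a SMILES string, compared with an integral Tanimoto over their counts:

      from collections import Counter

      def lingos(smiles, q=4):
          """Multiset of overlapping q-character substrings."""
          return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

      def lingo_sim(a, b):
          ca, cb = lingos(a), lingos(b)
          inter = sum(min(ca[k], cb[k]) for k in ca if k in cb)
          union = sum((ca | cb).values())      # element-wise max of counts
          return inter / union if union else 0.0

      print(lingo_sim("c1ccccc1O", "c1ccccc1N"))   # similar aromatic rings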

  20. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study was conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  1. 78 FR 15734 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0010] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... computer matching program between the Department of Homeland Security/U.S. Citizenship and Immigration...

  2. 78 FR 15733 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0008] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... computer matching program between the Department of Homeland Security/U.S. Citizenship and Immigration...

  3. 78 FR 32711 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-31

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Department of Veterans Affairs (VA) provides notice that it intends to conduct a recurring computer-matching program matching Internal Revenue Service (IRS...

  4. The Chemistry Development Kit (CDK) v2.0: atom typing, depiction, molecular formulas, and substructure searching.

    PubMed

    Willighagen, Egon L; Mayfield, John W; Alvarsson, Jonathan; Berg, Arvid; Carlsson, Lars; Jeliazkova, Nina; Kuhn, Stefan; Pluskal, Tomáš; Rojas-Chertó, Miquel; Spjuth, Ola; Torrance, Gilleain; Evelo, Chris T; Guha, Rajarshi; Steinbeck, Christoph

    2017-06-06

The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception. It is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, however, the code base has grown significantly, resulting in many complex interdependencies among components and poor performance of many algorithms. We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvements to existing functionality that have led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules. Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. This paper highlights our continued efforts to provide a community-driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer-reviewed publishing platform for scientific computing software. Graphical abstract: CDK 2.0 provides new features and improved performance.

  5. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, the recent gains stem from the emergence of multi-core high performance computers, so parallel computing has become a key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high performance workstation.
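    The distributed-memory style can be illustrated with mpi4py (a generic Monte Carlo pi estimate, not PHITS itself; requires the mpi4py package and an MPI runtime, e.g. run with mpiexec -n 4 python mc.py):

      import random
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 1_000_000                  # histories simulated on this rank
      random.seed(rank)                    # independent stream per rank
      hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(n_local))
      total = comm.reduce(hits, op=MPI.SUM, root=0)   # gather tallies on rank 0
      if rank == 0:
          print("pi ~", 4 * total / (n_local * size))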

  6. GRAPE project

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    2002-12-01

We give an overview of our GRAvity PipE (GRAPE) project to develop special-purpose computers for astrophysical N-body simulations. The basic idea of GRAPE is to attach a custom-built computer, dedicated to the calculation of gravitational interactions between particles, to a general-purpose programmable computer. With this hybrid architecture, we can achieve both a wide range of applications and very high peak performance. Our newest machine, GRAPE-6, achieved a peak speed of 32 Tflops, and sustained performance of 11.55 Tflops, for a total budget of about 4 million USD. We also discuss the relative advantages of special-purpose and general-purpose computers and the future of high-performance computing for science and technology.

  7. 75 FR 16171 - Privacy Act of 1974; Notice of Modification of Existing Computer Matching Program Between the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-31

    ... program for the purpose of income verifications and computer matching. DATES: Effective Date: The... additional verification to identify inappropriate (excess or insufficient) rental assistance, and perhaps... Act, the Native American Housing Assistance and Self-Determination Act of 1996, and the Quality...

  8. Development and validation of spray models for investigating diesel engine combustion and emissions

    NASA Astrophysics Data System (ADS)

    Som, Sibendu

Diesel engines intrinsically generate NOx and particulate matter, which need to be reduced significantly in order to comply with increasingly stringent regulations worldwide. This motivates diesel engine manufacturers to gain a fundamental understanding of the spray and combustion processes so as to optimize these processes and reduce engine emissions. Strategies being investigated to reduce the engine's raw emissions include advancements in fuel injection systems, efficient nozzle orifice design, injection and combustion control strategies, exhaust gas recirculation, and the use of alternative fuels such as biodiesel. This thesis explores several of these approaches (nozzle orifice design, injection control strategy, and biodiesel use) by performing computer modeling of diesel engine processes. Fuel atomization characteristics are known to have a significant effect on the combustion and emission processes in diesel engines. Primary fuel atomization is induced by aerodynamics in the near-nozzle region as well as by cavitation and turbulence from the injector nozzle. The breakup models currently used in diesel engine simulations generally consider aerodynamically induced breakup using the Kelvin-Helmholtz (KH) instability model, but do not account for inner nozzle flow effects. An improved primary breakup model (KH-ACT), incorporating cavitation and turbulence effects along with aerodynamically induced breakup, is developed and incorporated in the computational fluid dynamics code CONVERGE. The spray simulations using the KH-ACT model are "quasi-dynamically" coupled with inner nozzle flow computations (using FLUENT). This presents a novel tool to capture the influence of inner nozzle flow effects such as cavitation and turbulence on spray, combustion, and emission processes. Extensive validation is performed against the non-evaporating spray data from Argonne National Laboratory. Performance of the KH and KH-ACT models is compared against the evaporating and combusting data from Sandia National Laboratory. The KH-ACT model is observed to provide better predictions for spray dispersion, axial velocity decay, Sauter mean diameter, and the liquid and lift-off length interplay, which is attributed to the enhanced primary breakup predicted by this model. In addition, experimentally observed trends with changing nozzle conicity could only be captured by the KH-ACT model. Results further indicate that combustion under diesel engine conditions is characterized by a double-flame structure, with a rich premixed reaction zone near the flame stabilization region and a non-premixed reaction zone further downstream. Finally, the differences in inner nozzle flow and spray characteristics of petrodiesel and biodiesel are quantified. The improved modeling capability developed in this work can be used for extensive diesel engine simulations to further optimize injection, spray, combustion, and emission processes.

  9. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that advance scientific research and education in the U.S. and globally, and help train the next-generation workforce.

  10. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

An increase in demand for computing power in academia has created the need for high performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack mount configuration, gaining the space savings of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine, while minimizing occupied space and still remaining cost effective.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonachea, D.; Dickens, P.; Thakur, R.

There is a growing interest in using Java as the language for developing high-performance computing applications. To be successful in the high-performance computing domain, however, Java must not only be able to provide high computational performance, but also high-performance I/O. In this paper, we first examine several approaches that attempt to provide high-performance I/O in Java - many of which are not obvious at first glance - and evaluate their performance on two parallel machines, the IBM SP and the SGI Origin2000. We then propose extensions to the Java I/O library that address the deficiencies in the Java I/O API and improve performance dramatically. The extensions add bulk (array) I/O operations to Java, thereby removing much of the overhead currently associated with array I/O in Java. We have implemented the extensions in two ways: in a standard JVM using the Java Native Interface (JNI) and in a high-performance parallel dialect of Java called Titanium. We describe the two implementations and present performance results that demonstrate the benefits of the proposed extensions.

  12. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present a workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  13. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistance.

  14. On the "Exchangeability" of Hands-On and Computer-Simulated Science Performance Assessments. CSE Technical Report.

    ERIC Educational Resources Information Center

    Rosenquist, Anders; Shavelson, Richard J.; Ruiz-Primo, Maria Araceli

    Inconsistencies in scores from computer-simulated and "hands-on" science performance assessments have led to questions about the exchangeability of these two methods in spite of the highly touted potential of computer-simulated performance assessment. This investigation considered possible explanations for students' inconsistent performances: (1)…

  15. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    PubMed

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.

  16. The Atacama Cosmology Telescope: High-Resolution Sunyaev-Zel'dovich Array Observations of ACT SZE-Selected Clusters from the Equatorial Strip

    NASA Technical Reports Server (NTRS)

    Reese, Erik D.; Mroczkowski, Tony; Menanteau, Felipe; Hilton, Matt; Sievers, Jonathan; Aguirre, Paula; Appel, John William; Baker, Andrew J.; Bond, J. Richard; Das, Sudeep; et al.

    2011-01-01

    We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically-confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly-discovered, massive (10(exp 15) Msun), high-redshift (z=0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA, we compute the integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as Abell 2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the ≲20% level for some fraction of clusters.

  17. The Atacama Cosmology Telescope: High-Resolution Sunyaev-Zeldovich Array Observations of ACT SZE-Selected Clusters from the Equatorial Strip

    NASA Technical Reports Server (NTRS)

    Reese, Erik; Mroczkowski, Tony; Menanteau, Felipe; Hilton, Matt; Sievers, Jonathan; Aguirre, Paula; Appel, John William; Baker, Andrew J.; Bond, J. Richard; Das, Sudeep; et al.

    2011-01-01

    We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically-confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly-discovered, massive (approximately 10(exp 15) Msun), high-redshift (z = 0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA, we compute the integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as Abell 2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the ≲20% level for some fraction of clusters.

  18. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  19. 78 FR 38724 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0006] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... Agreement that establishes a computer matching program between the Department of Homeland Security/U.S...

  20. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
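
    For reference, the scalability analysis cited above rests on Amdahl's law. In its basic form, with parallelizable fraction f of the work and n cores, the speedup is

        S(n) = \frac{1}{(1 - f) + f/n}

    so the reported 12-fold improvement on 12 cores implies f very close to 1 for these processes. This is the textbook form; the symmetric-multicore variant used by the authors may add chip-level terms not given in the abstract.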

  1. Embedded Streaming Deep Neural Networks Accelerator With Applications.

    PubMed

    Dundar, Aysegul; Jin, Jonghoon; Martini, Berin; Culurciello, Eugenio

    2017-07-01

    Deep convolutional neural networks (DCNNs) have become a very powerful tool in visual perception. DCNNs have applications in autonomous robots, security systems, mobile phones, and automobiles, where high throughput of the feedforward evaluation phase and power efficiency are important. Because of this increased usage, many field-programmable gate array (FPGA)-based accelerators have been proposed. In this paper, we present an optimized streaming method for a DCNN hardware accelerator on an embedded platform. The streaming method acts as a compiler, transforming a high-level representation of DCNNs into operation codes to execute applications in a hardware accelerator. The proposed method utilizes the maximum computational resources available based on a novel scheduled routing topology that combines data reuse and data concatenation. It is tested with a hardware accelerator implemented on the Xilinx Kintex-7 XC7K325T FPGA. The system fully explores weight-level and node-level parallelizations of DCNNs and achieves a peak performance of 247 G-ops while consuming less than 4 W of power. We test our system with applications on object classification and object detection in real-world scenarios. Our results indicate high-performance efficiency, outperforming all other presented platforms while running these applications.
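
    As a purely illustrative sketch of the "compiler" idea - the accelerator's actual instruction set is not described in the abstract, so the operation codes below are invented - a lowering pass from a layer list to a stream of operation codes might look like:

        import java.util.ArrayList;
        import java.util.List;

        public class StreamCompilerSketch {
            enum Kind { CONV, POOL }
            record Layer(Kind kind, int channels) {}

            // Hypothetical lowering pass: each layer becomes load/compute/stream-out codes.
            static List<String> compile(List<Layer> net) {
                List<String> ops = new ArrayList<>();
                for (Layer l : net) {
                    if (l.kind() == Kind.CONV) {
                        ops.add("LOAD_WEIGHTS ch=" + l.channels()); // reuse cached weights where possible
                    }
                    ops.add(l.kind().name());
                    ops.add("STREAM_OUT"); // concatenate results for the next stage
                }
                return ops;
            }

            public static void main(String[] args) {
                System.out.println(compile(List.of(
                        new Layer(Kind.CONV, 32), new Layer(Kind.POOL, 32))));
            }
        }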

  2. Computational estimation of the influence of the main body-to-iliac limb length ratio on the displacement forces acting on an aortic endograft. Theoretical application to Bolton Treovance® Abdominal Stent-Graft.

    PubMed

    Georgakarakos, E; Xenakis, A; Georgiadis, G S; Argyriou, C; Manopoulos, C; Tsangaris, S; Lazarides, M K

    2014-10-01

    The influence of the relative iliac limb length of an endograft (EG) on the displacement forces (DF) predisposing to adverse effects is under-appreciated in the literature. We therefore conducted a computational study to estimate the magnitude of the DF acting over an entire reconstructed EG and its counterparts for a range of main body-to-iliac limb length (L1/L2) ratios. A custom bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software, and fluid-structure interaction analysis was used to estimate the DF. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5. The increase in L1/L2 slightly affected the DF on the EG (ranging from 3.8 to 4.1 N) and its bifurcation (4.0 to 4.6 N). However, the forces exerted at the iliac sites were strongly affected by the L1/L2 values (ranging from 0.9 to 2.2 N), showing a parabolic pattern with a minimum at a ratio of 0.6. It is suggested that the hemodynamic effect of the relative limb lengths should not be considered negligible. A high main body-to-iliac limb length ratio seems to favor a low bifurcation hemodynamically, but it attenuates the modular stability of the main body-iliac limb junctions. Further clinical studies should investigate the clinical relevance of these findings. The Bolton Treovance(®) device is presented as a representative, improved stent-graft design that takes these hemodynamic parameters into account in order to achieve a promising, improved clinical performance.

  3. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large-scale behaviour, provided it is restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor to those obtained when the small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
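
    The bit-flip emulation mentioned above can be sketched in a few lines of Java; the uniform per-bit fault rate and the flipping of any of the 64 bits of a double are illustrative assumptions, since the abstract does not specify the authors' exact fault model:

        import java.util.Random;

        public class BitFlipSketch {
            // Flip each of the 64 bits of x independently with probability 'rate'.
            static double corrupt(double x, double rate, Random rng) {
                long bits = Double.doubleToRawLongBits(x);
                for (int b = 0; b < 64; b++) {
                    if (rng.nextDouble() < rate) {
                        bits ^= 1L << b;
                    }
                }
                return Double.longBitsToDouble(bits);
            }

            public static void main(String[] args) {
                Random rng = new Random(42);
                // In the study, faults like these are applied only to the small-scale
                // components of the model state; large scales stay exact.
                System.out.println(corrupt(1.0, 1e-2, rng));
            }
        }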

  4. Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel

    NASA Image and Video Library

    1973-04-21

    The test data recording equipment located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer. The 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted this direct-current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing. It could also be displayed in the control room via strip charts or oscillographs. The 16- by 56-foot ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103, which performed the needed calculations. The information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was still far faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Congressional Unitary Plan Act, which coordinated wind tunnel construction at the NACA, Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.

  5. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the occurrence of an act or event must begin on the day after the act or event. (For example, if your... before the occurrence of an act or event must be computed so that the period ends on the day before the act or event. (For example, if you are transferring ownership or operational control of your site, and...
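
    The counting rule quoted in these excerpts is easy to mis-set in compliance tooling. A minimal Java sketch of the two rules follows; it is illustrative only and ignores the weekend and holiday provisions elided from the excerpt:

        import java.time.LocalDate;

        public class RapTimeSketch {
            // A period that begins after an act or event starts on the day after it.
            static LocalDate periodStart(LocalDate actOrEvent) {
                return actOrEvent.plusDays(1);
            }

            // A period scheduled to end before an act or event ends on the day before it.
            static LocalDate periodEnd(LocalDate actOrEvent) {
                return actOrEvent.minusDays(1);
            }

            public static void main(String[] args) {
                LocalDate transfer = LocalDate.of(2012, 7, 1); // hypothetical ownership transfer
                System.out.println(periodStart(transfer) + " / " + periodEnd(transfer));
            }
        }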

  6. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the occurrence of an act or event must begin on the day after the act or event. (For example, if your... before the occurrence of an act or event must be computed so that the period ends on the day before the act or event. (For example, if you are transferring ownership or operational control of your site, and...

  7. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the occurrence of an act or event must begin on the day after the act or event. (For example, if your... before the occurrence of an act or event must be computed so that the period ends on the day before the act or event. (For example, if you are transferring ownership or operational control of your site, and...

  8. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the occurrence of an act or event must begin on the day after the act or event. (For example, if your... before the occurrence of an act or event must be computed so that the period ends on the day before the act or event. (For example, if you are transferring ownership or operational control of your site, and...

  9. 40 CFR 270.215 - How are time periods in the requirements in this subpart and my RAP computed?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the occurrence of an act or event must begin on the day after the act or event. (For example, if your... before the occurrence of an act or event must be computed so that the period ends on the day before the act or event. (For example, if you are transferring ownership or operational control of your site, and...

  10. Welcome to the NASA High Performance Computing and Communications Computational Aerosciences (CAS) Workshop 2000

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H. (Editor)

    2000-01-01

    The purpose of the CAS workshop is to bring together NASA's scientists and engineers and their counterparts in industry, other government agencies, and academia working in the Computational Aerosciences and related fields. This workshop is part of the technology transfer plan of the NASA High Performance Computing and Communications (HPCC) Program. Specific objectives of the CAS workshop are to: (1) communicate the goals and objectives of HPCC and CAS, (2) promote and disseminate CAS technology within the appropriate technical communities, including NASA, industry, academia, and other government labs, (3) help promote synergy among CAS and other HPCC scientists, and (4) permit feedback from peer researchers on issues facing High Performance Computing in general and the CAS project in particular. This year we had a number of exciting presentations in the traditional aeronautics, aerospace sciences, and high-end computing areas and in the earth science, space science, and revolutionary computing areas that are less familiar to many of us affiliated with CAS. More than 40 high-quality papers were organized into ten sessions and presented over the three-day workshop. The proceedings are organized here for easy access: by author, title, and topic.

  11. Evolution of a designless nanoparticle network into reconfigurable Boolean logic

    NASA Astrophysics Data System (ADS)

    Bose, S. K.; Lawrence, C. P.; Liu, Z.; Makarenko, K. S.; van Damme, R. M. J.; Broersma, H. J.; van der Wiel, W. G.

    2015-12-01

    Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, potentially exploitable physical processes that could help solve a problem, such as capacitive crosstalk, are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures.
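
    The in-situ evolution described above amounts to a genetic-algorithm search over a vector of control voltages. The sketch below is a toy (1+1) evolutionary loop in that spirit; the number of control electrodes and the fitness function are stand-ins, since in the experiment fitness is a measurement of how well the physical device's output matches the target gate's truth table:

        import java.util.Arrays;
        import java.util.Random;

        public class VoltageEvolutionSketch {
            static final Random RNG = new Random(1);

            // Stand-in fitness; the real objective compares measured device output
            // against the truth table of the desired Boolean gate.
            static double fitness(double[] voltages) {
                double s = 0;
                for (double v : voltages) {
                    s -= Math.abs(v - 0.5); // toy target: all voltages near 0.5
                }
                return s;
            }

            static double[] mutate(double[] parent) {
                double[] child = parent.clone();
                child[RNG.nextInt(child.length)] += 0.1 * RNG.nextGaussian();
                return child;
            }

            public static void main(String[] args) {
                double[] best = new double[5]; // electrode count assumed for illustration
                double bestFit = fitness(best);
                for (int gen = 0; gen < 1000; gen++) {
                    double[] cand = mutate(best);
                    double f = fitness(cand);
                    if (f > bestFit) { best = cand; bestFit = f; }
                }
                System.out.println(Arrays.toString(best) + "  fitness=" + bestFit);
            }
        }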

  12. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  13. A cross-sectional study of the effects of load carriage on running characteristics and tibial mechanical stress: implications for stress fracture injuries in women

    DTIC Science & Technology

    2017-03-23

    Computing resources were made available by the US Department of Defense High Performance Computing Modernization Program at the Air Force… Author affiliation: Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, United States Army Medical Research and Materiel Command, Fort Detrick, Maryland, USA.

  14. Computer-aided cognitive rehabilitation improves cognitive performances and induces brain functional connectivity changes in relapsing remitting multiple sclerosis patients: an exploratory study.

    PubMed

    Bonavita, S; Sacco, R; Della Corte, M; Esposito, S; Sparaco, M; d'Ambrosio, A; Docimo, R; Bisecco, A; Lavorgna, L; Corbo, D; Cirillo, S; Gallo, A; Esposito, F; Tedeschi, G

    2015-01-01

    To better understand the effects of short-term computer-based cognitive rehabilitation (cCR) on cognitive performances and default mode network (DMN) intrinsic functional connectivity (FC) in cognitively impaired relapsing remitting (RR) multiple sclerosis (MS) patients. Eighteen cognitively impaired RRMS patients underwent neuropsychological evaluation by the Rao's brief repeatable battery and resting-state functional magnetic resonance imaging to evaluate FC of the DMN before and after a short-term (8 weeks, twice a week) cCR. A control group of 14 cognitively impaired RRMS patients was assigned to an aspecific cognitive training (aCT), and underwent the same study protocol. Correlations between DMN and cognitive performances were also tested. After cCR, there was a significant improvement of the following tests: SDMT (p < 0.01), PASAT 3" (p < 0.00), PASAT 2" (p < 0.03), SRT-D (p < 0.02), and 10/36 SPART-D (p < 0.04); as well as a significant increase of the FC of the DMN in the posterior cingulate cortex (PCC) and bilateral inferior parietal cortex (IPC). After cCR, a significant negative correlation between Stroop Color-Word Interference Test and FC in the PCC emerged. After aCT, the control group did not show any significant effect either on FC or neuropsychological tests. No significant differences were found in brain volumes and lesion load in both groups when comparing data acquired at baseline and after cCR or aCT. In cognitively impaired RRMS patients, cCR improves cognitive performances (i.e., processing speed and visual and verbal sustained memory), and increases FC in the PCC and IPC of the DMN. This exploratory study suggests that cCR may induce adaptive cortical reorganization favoring better cognitive performances, thus strengthening the value of cognitive exercise in the general perspective of building either cognitive or brain reserve.

  15. 20 CFR 225.22 - Employee RIB PIA used in survivor annuities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... This PIA is computed in accordance with section 215 of the Social Security Act using the deceased... RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the... Employee Retirement Insurance Benefit PIA (Employee RIB PIA) is used to compute the employee RIB amount...

  16. 75 FR 43579 - Privacy Act of 1974; Computer Matching Program Between the Office of Personnel Management and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... safeguards for disclosure of Social Security benefit information to OPM via direct computer link for the... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program Between the Office of Personnel Management and Social Security Administration AGENCY: Office of Personnel Management...

  17. 78 FR 3474 - Privacy Act of 1974; Computer Matching Program Between the Office Of Personnel Management and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... Security benefit information to OPM via direct computer link for the administration of certain programs by... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program Between the Office Of Personnel Management and Social Security Administration AGENCY: Office of Personnel Management...

  18. 75 FR 68396 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of Labor (DOL))-Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0052] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Department of Labor (DOL))--Match Number 1003 AGENCY: Social Security... as shown above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection...

  19. 78 FR 12127 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of the Treasury...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... 1310 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer..., as shown above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0007] Privacy Act of 1974, as Amended...

  20. 75 FR 51154 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of the Treasury...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... 1310 AGENCY: Social Security Administration (SSA) ACTION: Notice of a renewal of an existing computer..., as shown above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0035] Privacy Act of 1974, as Amended...

  1. 78 FR 69926 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0059] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  2. 76 FR 21091 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0022] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  3. 40 CFR 72.11 - Computation of time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... event shall begin on the day the act or event occurs. (b) Unless otherwise stated, any time period scheduled, under the Acid Rain Program, to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event occurs. (c) Unless otherwise stated, if...

  4. 40 CFR 72.11 - Computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... event shall begin on the day the act or event occurs. (b) Unless otherwise stated, any time period scheduled, under the Acid Rain Program, to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event occurs. (c) Unless otherwise stated, if...

  5. 40 CFR 72.11 - Computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... event shall begin on the day the act or event occurs. (b) Unless otherwise stated, any time period scheduled, under the Acid Rain Program, to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event occurs. (c) Unless otherwise stated, if...

  6. 40 CFR 72.11 - Computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... event shall begin on the day the act or event occurs. (b) Unless otherwise stated, any time period scheduled, under the Acid Rain Program, to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event occurs. (c) Unless otherwise stated, if...

  7. 40 CFR 72.11 - Computation of time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... event shall begin on the day the act or event occurs. (b) Unless otherwise stated, any time period scheduled, under the Acid Rain Program, to begin before the occurrence of an act or event shall be computed so that the period ends on the day before the act or event occurs. (c) Unless otherwise stated, if...

  8. 20 CFR 234.20 - Computation of the employee's 1937 Act LSDP basic amount.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... compensation and section 209 of the Social Security Act for a definition of creditable wages.) Closing date... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computation of the employee's 1937 Act LSDP basic amount. 234.20 Section 234.20 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  9. 76 FR 56781 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ... require additional verification to identify inappropriate or inaccurate rental assistance, and may provide... Affordable Housing Act, the Native American Housing Assistance and Self-Determination Act of 1996, and the... matching activities. The computer matching program will also provide for the verification of social...

  10. Release Fixed Heel Point (FHP) Accommodation Model Verification and Validation (V and V) Plan - Rev A

    DTIC Science & Technology

    2017-01-23

    Performing organization: RDECOM-TARDEC-ACT. Keywords: occupant work space, central 90% of the Soldier population, encumbrance, posture and position, verification and validation, computer-aided design. Excerpt: …factors engineers could benefit by working with vehicle designers to perform virtual assessments in CAD when there is not enough time and/or funding to…

  11. The Electronic and Computer Technician Vocational Education Incentive Grants Act. Hearing before the Subcommittee on Elementary, Secondary, and Vocational Education of the Committee on Education and Labor. House of Representatives, Ninety-Seventh Congress, Second Session (San Francisco, CA) on H.R. 5820.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and Labor.

    This report documents a hearing to amend the Vocational Education Act of 1963 to make incentive grants to the states for electronic and computer technician vocational education programs. The discussion focused on the Electronic and Computer Technician Education Incentive Grants Act. Testimony included prepared statements, letters, and…

  12. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  13. In Silico Identification and Pharmacological Evaluation of Novel Endocrine Disrupting Chemicals That Act via the Ligand-Binding Domain of the Estrogen Receptor α

    PubMed Central

    McRobb, Fiona M.; Kufareva, Irina; Abagyan, Ruben

    2014-01-01

    Endocrine disrupting chemicals (EDCs) pose a significant threat to human health, society, and the environment. Many EDCs elicit their toxic effects through nuclear hormone receptors, like the estrogen receptor α (ERα). In silico models can be used to prioritize chemicals for toxicological evaluation to reduce the amount of costly pharmacological testing and enable early alerts for newly designed compounds. However, many of the current computational models are overly dependent on the chemistry of known modulators and perform poorly for novel chemical scaffolds. Herein we describe the development of computational, three-dimensional multi-conformational pocket-field docking, and chemical-field docking models for the identification of novel EDCs that act via the ligand-binding domain of ERα. These models were highly accurate in the retrospective task of distinguishing known high-affinity ERα modulators from inactive or decoy molecules, with minimal training. To illustrate the utility of the models in prospective in silico compound screening, we screened a database of over 6000 environmental chemicals and evaluated the 24 top-ranked hits in an ERα transcriptional activation assay and a differential scanning fluorimetry-based ERα binding assay. Promisingly, six chemicals displayed ERα agonist activity (32nM–3.98μM) and two chemicals had moderately stabilizing effects on ERα. Two newly identified active compounds were chemically related β-adrenergic receptor (βAR) agonists, dobutamine, and ractopamine (a feed additive that promotes leanness in cattle and poultry), which are the first βAR agonists identified as activators of ERα-mediated gene transcription. This approach can be applied to other receptors implicated in endocrine disruption. PMID:24928891

  14. In silico identification and pharmacological evaluation of novel endocrine disrupting chemicals that act via the ligand-binding domain of the estrogen receptor α.

    PubMed

    McRobb, Fiona M; Kufareva, Irina; Abagyan, Ruben

    2014-09-01

    Endocrine disrupting chemicals (EDCs) pose a significant threat to human health, society, and the environment. Many EDCs elicit their toxic effects through nuclear hormone receptors, like the estrogen receptor α (ERα). In silico models can be used to prioritize chemicals for toxicological evaluation to reduce the amount of costly pharmacological testing and enable early alerts for newly designed compounds. However, many of the current computational models are overly dependent on the chemistry of known modulators and perform poorly for novel chemical scaffolds. Herein we describe the development of computational, three-dimensional multi-conformational pocket-field docking, and chemical-field docking models for the identification of novel EDCs that act via the ligand-binding domain of ERα. These models were highly accurate in the retrospective task of distinguishing known high-affinity ERα modulators from inactive or decoy molecules, with minimal training. To illustrate the utility of the models in prospective in silico compound screening, we screened a database of over 6000 environmental chemicals and evaluated the 24 top-ranked hits in an ERα transcriptional activation assay and a differential scanning fluorimetry-based ERα binding assay. Promisingly, six chemicals displayed ERα agonist activity (32nM-3.98μM) and two chemicals had moderately stabilizing effects on ERα. Two newly identified active compounds were chemically related β-adrenergic receptor (βAR) agonists, dobutamine, and ractopamine (a feed additive that promotes leanness in cattle and poultry), which are the first βAR agonists identified as activators of ERα-mediated gene transcription. This approach can be applied to other receptors implicated in endocrine disruption.

  15. Performance of High-Reliability Space-Qualified Processors Implementing Software Defined Radios

    DTIC Science & Technology

    2014-03-01

    Performing organization: Naval Postgraduate School, Department of Electrical and Computer Engineering, 833 Dyer Road, Monterey, CA 93943-5121. Excerpt: …capability. Radiation in space poses a considerable threat to modern microelectronic devices, in particular to the high-performance low-cost computing…

  16. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  17. A tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1992-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. An improved high temperature matrix resin is provided which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance and moisture and solvent resistances.

  18. The teamwork in assertive community treatment (TACT) scale: development and validation.

    PubMed

    Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F

    2012-11-01

    Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.
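
    For readers outside psychometrics, the internal-consistency statistic cited above is Cronbach's alpha. For a subscale of k items with item-score variances \sigma^2_{Y_i} and total-score variance \sigma^2_X, it is defined as

        \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right)

    (the standard definition; the abstract reports the resulting values, not the formula).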

  19. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  20. 75 FR 39575 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the.... ACTION: Notice of a computer matching program between the HUD and the USDA. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection...

  1. 75 FR 67755 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the.... ACTION: Notice of a computer matching program between the HUD and the SBA. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection...

  2. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communications Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  3. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example, visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed as well as descriptions of other hardware for digital video and film recording.

  4. Optical high-performance computing: introduction to the JOSA A and Applied Optics feature.

    PubMed

    Caulfield, H John; Dolev, Shlomi; Green, William M J

    2009-08-01

    The feature issues in both Applied Optics and the Journal of the Optical Society of America A focus on topics of immediate relevance to the community working in the area of optical high-performance computing.

  5. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
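
    One representative performance-oriented design choice in this vein - an illustration, not necessarily the specific technique used in LAURA or 2D_TAG - is storing field data in flat primitive arrays rather than allocating one object per grid cell, which avoids per-object overhead and pointer chasing in inner loops:

        // One object per cell: convenient, but adds allocation and indirection costs.
        class Cell {
            double rho, u, v, e;
        }

        // Flat primitive arrays: cache-friendly, no per-cell objects.
        class FlowField {
            final int nx, ny;
            final double[] rho, u, v, e;

            FlowField(int nx, int ny) {
                this.nx = nx;
                this.ny = ny;
                int n = nx * ny;
                rho = new double[n];
                u = new double[n];
                v = new double[n];
                e = new double[n];
            }

            int idx(int i, int j) {
                return j * nx + i; // row-major indexing into the flat arrays
            }
        }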

  6. Turbulence modeling of free shear layers for high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.

    1993-01-01

    The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model would be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.
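
    For reference, the k-epsilon model named above solves two additional transport equations. In the standard high-Reynolds-number form (the Chien low-Reynolds-number variant used here adds damping functions and near-wall terms not reproduced from the abstract):

        \frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho u_j k)}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon

        \frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho u_j \varepsilon)}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial\varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}

    with eddy viscosity \mu_t = \rho C_\mu k^2/\varepsilon and the usual constants C_\mu = 0.09, C_{1\varepsilon} = 1.44, C_{2\varepsilon} = 1.92, \sigma_k = 1.0, \sigma_\varepsilon = 1.3.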

  7. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time required to analyze them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  8. A High Performance Computing Approach to the Simulation of Fluid Solid Interaction Problems with Rigid and Flexible Components (Open Access Publisher’s Version)

    DTIC Science & Technology

    2014-08-01

    Keywords: high performance computing, smoothed particle hydrodynamics, rigid body dynamics, flexible body dynamics. Authors: Arman Pazouki, Radu Serban, Dan Negrut. Excerpt: This work outlines a unified… are implemented to model rigid and flexible multibody dynamics. The two-way coupling of the fluid and solid phases is supported through use of…

  9. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  10. beta-Aminoalcohols as Potential Reactivators of Aged Sarin-/Soman-Inhibited Acetylcholinesterase

    DTIC Science & Technology

    2017-02-08

    This approach includes high-quality quantum mechanical/molecular mechanical calculations, providing reliable reactivation steps and energetics ... (I. V. Khavrutskii and A. Wallqvist, Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced ...)

  11. Business Models of High Performance Computing Centres in Higher Education in Europe

    ERIC Educational Resources Information Center

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  12. Charging Ahead into the Next Millennium: Proceedings of the Systems and Technology Symposium (20th) Held in Denver, Colorado on 7-10 June 1999

    DTIC Science & Technology

    1999-06-01

    [The captured snippet contains only the proceedings' acronym glossary, including entries such as High-Performance Computing and High Performance Computing and Communications; no abstract text survives.]

  13. Institute for scientific computing research;fiscal year 1999 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D

    2000-03-28

    Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well worth the continued effort. A change of administration for the ISCR occurred during FY 1999. Acting Director John Fitzgerald retired from LLNL in August after 35 years of service, including the last two at the helm of the ISCR. David Keyes, who has been a regular visitor in conjunction with ASCI scalable algorithms research since October 1997, overlapped with John for three months and serves half-time as the new Acting Director.

  14. 77 FR 33547 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare and Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...

  15. 32 CFR Appendix E to Part 806b - Privacy Impact Assessment

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...

  16. 20 CFR 225.23 - Combined Earnings PIA used in survivor annuities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... section 215 of the Social Security Act as in effect on December 31, 1974. It is computed using the... RAILROAD RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities... annuities. The Combined Earnings PIA used in survivor annuities may be used in computing the tier II...

  17. 20 CFR 225.24 - SS Earnings PIA used in survivor annuities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Security Earnings PIA (SS Earnings PIA) used in survivor annuities may be used in computing the tier II... the Social Security Act as in effect on December 31, 1974. It is computed using the deceased employee... RETIREMENT ACT PRIMARY INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the...

  18. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    ERIC Educational Resources Information Center

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  19. 29 CFR 783.43 - Computation of seaman's minimum wage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS... STANDARDS ACT TO EMPLOYEES EMPLOYED AS SEAMEN Computation of Wages and Hours § 783.43 Computation of seaman... all hours on duty in such period at the hourly rate prescribed for employees newly covered by the Act...

  20. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  1. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  2. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  3. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  4. 77 FR 32709 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of Homeland Security...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0089] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Department of Homeland Security (DHS))--Match Number 1010 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching program that...

  5. 12 CFR 792.19 - How does NCUA calculate the fees for processing my request?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED.... Searches may be done manually or by computer. Search does not include modification of an existing program... cost of operating the computer for computer searches for records. (c) NCUA will charge the following...

  6. 78 FR 37647 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0010] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Railroad Retirement Board (RRB))--Match Number 1006 AGENCY: Social Security Administration. ACTION: Notice of a renewal of an existing computer matching program that will expire on...

  7. 77 FR 6620 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/the States); Match 6000 and 6003

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0102] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ the States); Match 6000 and 6003 AGENCY: Social Security Administration..., as shown above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection...

  8. 76 FR 12398 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Public Debt (BPD...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-07

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0034] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Bureau of the Public Debt (BPD))--Match Number 1304 AGENCY: Social Security... as shown above. SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection...

  9. 32 CFR Appendix E to Part 806b - Privacy Impact Assessment

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...

  10. 32 CFR Appendix E to Part 806b - Privacy Impact Assessment

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...

  11. 32 CFR Appendix E to Part 806b - Privacy Impact Assessment

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...

  12. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. Massively Parallel Processors (MPPs) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state-variables. (2) Disk speed vs. computational speeds. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements in the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; we are running the solver, and the solver outputs the solution over traditional network interfaces that must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data, but some visualization tools for transient problems require dealing with time.
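
    The disk-space point in (1) is easy to make concrete. Using the 5 state variables quoted for a simple 3D Euler solver (7 for Navier-Stokes with a turbulence model) and an assumed mesh size and frame count, a rough sizing looks like this:

      # Back-of-the-envelope snapshot sizing for the disk-space point above.
      # Mesh size and frame count are hypothetical; the 5 state variables
      # follow the Euler case in the abstract, stored as 4-byte floats.

      nodes = 10_000_000         # assumed mesh size
      variables = 5              # Euler primitive variables (7 for N-S + turbulence)
      bytes_per_value = 4        # single precision
      frames = 1000              # assumed number of saved time steps

      snapshot = nodes * variables * bytes_per_value
      total = snapshot * frames
      print(f"one snapshot: {snapshot / 2**30:.2f} GiB")
      print(f"{frames} frames: {total / 2**40:.2f} TiB")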

  13. Adaptive Decision Aiding in Computer-Assisted Instruction: Adaptive Computerized Training System (ACTS).

    ERIC Educational Resources Information Center

    Hopf-Weichel, Rosemarie; And Others

    This report describes results of the first year of a three-year program to develop and evaluate a new Adaptive Computerized Training System (ACTS) for electronics maintenance training. (ACTS incorporates an adaptive computer program that learns the student's diagnostic and decision value structure, compares it to that of an expert, and adapts the…

  14. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
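
    The price-performance arithmetic is straightforward: PPR is sustained FLOPS divided by system price. In the sketch below, the C90 rate comes from the abstract, while the prices and the IBM/SGI rates are back-filled assumptions chosen only so the printed PPRs match the quoted 160, 260, and 220 FLOPS per dollar.

      # PPR = sustained FLOPS / price. Only the C90's 460 MFLOPS and the three
      # quoted PPRs come from the abstract; every other number is an assumption.

      systems = {
          # name: (sustained MFLOPS, assumed price in dollars)
          "Cray C90 (1 CPU)": (460.0, 2_875_000),
          "IBM RS/6000 590":  (110.0,   423_000),
          "SGI R8000":        ( 90.0,   409_000),
      }

      for name, (mflops, price) in systems.items():
          ppr = mflops * 1e6 / price
          print(f"{name:18s} {ppr:6.0f} FLOPS per dollar")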

  15. DURIP: High Performance Computing in Biomathematics Applications

    DTIC Science & Technology

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  16. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  17. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
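
    As a sketch of what such a discrete event simulation does, the toy model below replays randomly generated jobs through a FIFO queue on a fixed pool of CPUs and reports the mean queue wait. All parameters (CPU count, arrival rate, job sizes, runtimes) are invented for illustration; this is not the NCCS tool.

      # Minimal discrete-event batch-queue sketch: jobs request CPUs for a
      # duration; a simple FIFO scheduler starts them as capacity frees up.

      import heapq, random

      def simulate(n_cpus=512, n_jobs=200, seed=1):
          random.seed(seed)
          free = n_cpus
          pending = []                  # FIFO queue of (cpus, runtime, submit)
          events = []                   # min-heap of (time, kind, payload)
          t = 0.0
          for _ in range(n_jobs):
              t += random.expovariate(1 / 5.0)            # inter-arrival time
              job = (random.choice([32, 64, 128]),         # CPUs requested
                     random.expovariate(1 / 20.0), t)      # runtime, submit time
              heapq.heappush(events, (t, "arrive", job))
          waits = []
          while events:
              now, kind, payload = heapq.heappop(events)
              if kind == "arrive":
                  pending.append(payload)
              else:                                        # a job finished
                  free += payload
              # start queued jobs in submission order while they fit
              while pending and pending[0][0] <= free:
                  cpus, run, submitted = pending.pop(0)
                  free -= cpus
                  waits.append(now - submitted)
                  heapq.heappush(events, (now + run, "finish", cpus))
          return sum(waits) / len(waits)

      print(f"mean queue wait: {simulate():.1f} time units")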

  18. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Radenski, Atanas; Follen, Gregory J. (Technical Monitor)

    2001-01-01

    The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever changing pool of lower-end internet nodes.
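
    A minimal local sketch of the generic divide-and-conquer paradigm can clarify what "generic" means here: divide, base, and combine are supplied as plug-in functions (instantiated below as a merge sort), and a process pool stands in for the satellite computational servers. This illustrates the paradigm only; the project's internet-based task-pool architecture is not reproduced.

      # Generic divide-and-conquer with pluggable divide/base/combine,
      # farmed out to a local process pool as a stand-in for remote workers.

      from concurrent.futures import ProcessPoolExecutor

      def divide(xs):                   # split a problem into two subproblems
          mid = len(xs) // 2
          return xs[:mid], xs[mid:]

      def base(xs):                     # solve small instances directly
          return sorted(xs)

      def combine(left, right):         # merge two solved halves
          out, i, j = [], 0, 0
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  out.append(left[i]); i += 1
              else:
                  out.append(right[j]); j += 1
          return out + left[i:] + right[j:]

      def solve_serial(xs, threshold=2000):
          if len(xs) <= threshold:
              return base(xs)
          a, b = divide(xs)
          return combine(solve_serial(a), solve_serial(b))

      def solve(xs, pool):
          a, b = divide(xs)
          # farm the two subproblems out to the worker pool
          fa, fb = pool.submit(solve_serial, a), pool.submit(solve_serial, b)
          return combine(fa.result(), fb.result())

      if __name__ == "__main__":
          import random
          data = [random.random() for _ in range(10_000)]
          with ProcessPoolExecutor() as pool:
              assert solve(data, pool) == sorted(data)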

  19. Dispersion compensation of fiber optic communication system with direct detection using artificial neural networks (ANNs)

    NASA Astrophysics Data System (ADS)

    Maghrabi, Mahmoud M. T.; Kumar, Shiva; Bakr, Mohamed H.

    2018-02-01

    This work introduces a powerful digital nonlinear feed-forward equalizer (NFFE), exploiting a multilayer artificial neural network (ANN). It mitigates impairments of optical communication systems arising from the nonlinearity introduced by direct photo-detection. In a direct detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field. The proposed equalizer provides the most efficient computational cost with high equalization performance. Its performance is comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose impulse response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to its extensive training, the equalizer achieves the ultimate performance limit of any feed-forward equalizer (FFE). The performance and efficiency of the equalizer are investigated by applying it to various practical short-reach fiber optic communication system scenarios. These scenarios are extracted from practical metro/media access networks and data center applications. The obtained results show that the ANN-NFFE compensates for the received BER degradation and significantly increases the tolerance to chromatic dispersion distortion.
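
    A toy numerical illustration of this idea (not the authors' equalizer): square-law detection makes the received samples a nonlinear function of the transmitted bits, and a small neural network trained on sliding windows of those samples can undo much of the resulting ISI. The channel taps, noise level, window length, and network size below are all made-up assumptions.

      # Toy ANN feed-forward equalizer after square-law (direct) detection.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 6000).astype(float)
      h = np.array([0.2, 0.9, 0.3])                 # assumed dispersive channel
      field = np.convolve(bits, h, mode="same")
      rx = np.abs(field) ** 2 + 0.01 * rng.standard_normal(bits.size)

      W = 7                                         # equalizer window length
      X = np.lib.stride_tricks.sliding_window_view(rx, W)
      y = bits[W // 2 : W // 2 + X.shape[0]]        # center-aligned targets

      n = 4000                                      # train/test split
      ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X[:n], y[:n])
      decisions = (ann.predict(X[n:]) > 0.5).astype(float)
      print("bit error rate:", np.mean(decisions != y[n:]))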

  20. Sound Processing Features for Speaker-Dependent and Phrase-Independent Emotion Recognition in Berlin Database

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia

    An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested to determine whether they are sufficient to discriminate among seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were "known" to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Even so, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.

  1. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
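
    Schematically, the combination the Alter performs reduces, at each output frequency, to u(f) = Ta a(f) + Td d(f), where Ta and Td are the acceleration- and displacement-dependent response transformation matrices delivered with the spacecraft model. The numpy sketch below uses random placeholder matrices simply to show the shape of the computation; it is not NASTRAN or DMAP code.

      # Mode acceleration data recovery, schematically: internal responses
      # from boundary accelerations and displacements at every frequency.
      # All matrices here are random placeholders, not NASTRAN output.

      import numpy as np

      rng = np.random.default_rng(0)
      n_int, n_bnd, n_freq = 50, 6, 200
      Ta = rng.standard_normal((n_int, n_bnd))      # acceleration-dependent
      Td = rng.standard_normal((n_int, n_bnd))      # displacement-dependent
      a = (rng.standard_normal((n_bnd, n_freq))
           + 1j * rng.standard_normal((n_bnd, n_freq)))   # boundary accel.
      d = (rng.standard_normal((n_bnd, n_freq))
           + 1j * rng.standard_normal((n_bnd, n_freq)))   # boundary displ.

      u = Ta @ a + Td @ d            # internal responses at all frequencies
      print(u.shape)                 # (50, 200)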

  2. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
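
    For readers unfamiliar with the unscented transform, the sketch below shows the core mechanics under simple assumptions: sigma points of a Gaussian state are propagated through a nonlinear function (a stand-in for the EOL simulation) and the transformed mean and covariance are recovered from weighted sums. The state, covariance, and nonlinear map are invented for illustration.

      # Minimal unscented transform: 2n+1 sigma points through a nonlinear map.

      import numpy as np

      def unscented_transform(mean, cov, f, kappa=0.0):
          n = mean.size
          L = np.linalg.cholesky((n + kappa) * cov)
          sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 points
          w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
          w[0] = kappa / (n + kappa)
          ys = np.array([f(s) for s in sigma])                # propagate
          y_mean = w @ ys
          diff = ys - y_mean
          y_cov = (w[:, None] * diff).T @ diff
          return y_mean, y_cov

      # stand-in "EOL simulation": a nonlinear scalar function of the state
      f = lambda x: np.array([np.sin(x[0]) + 0.1 * x[1] ** 2])
      m, P = unscented_transform(np.array([0.3, 1.0]),
                                 np.diag([0.01, 0.04]), f)
      print("predicted mean/variance:", m, P)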

  3. The medical science DMZ: a network design pattern for data-intensive medical science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Dart, Eli; Barnett, William

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  4. The medical science DMZ: a network design pattern for data-intensive medical science.

    PubMed

    Peisert, Sean; Dart, Eli; Barnett, William; Balas, Edward; Cuff, James; Grossman, Robert L; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2017-10-06

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  5. Evaluation of low wing-loading fuel conservative, short-haul transports

    NASA Technical Reports Server (NTRS)

    Pasley, L. H.; Waldeck, T. A.

    1976-01-01

    Fuel conservation attainable with two technology advancements, the Q-fan propulsion system and active control technology (ACT), was studied. Aircraft incorporating each technology were sized for a Federal Aviation Regulation (FAR) field length of 914 meters (3,000 feet), 148 passengers, and a 926 kilometer (500 nautical mile) mission. The cruise Mach number was 0.70 at 10100 meter (33,000 foot) altitude. The improvement resulting from application of the Q-fan propulsion system was computed relative to an optimized fuel conservative transport design. The performance improvements resulting from application of ACT were computed relative to the optimized Q-fan propulsion system configuration.

  6. NASA HPCC Technology for Aerospace Analysis and Design

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H.

    1999-01-01

    The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community-thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1 respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.

  7. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems in distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  8. The Computer Software Rental Amendments Act of 1988. Hearing on S. 2727 before the Subcommittee on Patents, Copyrights and Trademarks of the Committee on the Judiciary, United States Senate. One Hundredth Congress, Second Session (Provo, Utah, August 24, 1988).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on the Judiciary.

    A statement by Senator Orrin G. Hatch opened the hearing on The Computer Software Rental Amendments Act of 1988, a bill which would amend title 17, United States Code, the Copyright Act, to protect certain computer programs. The text of the bill is then presented, followed by the statements of four witnesses: (1) Dr. Alan C. Ashton, president,…

  9. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
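
    The flavor of such a Monte Carlo photon transport kernel can be conveyed with a drastically simplified, scalar-Python version (the paper's massively parallel OpenCL implementation is very different): photons take exponentially distributed steps through a scattering and absorbing slab, absorption is handled by weighting, and the surviving weight crossing the far boundary is tallied. The optical coefficients are assumed values, and scattering is treated as isotropic for brevity.

      # Toy 1-D slab photon transport by Monte Carlo (illustrative only).

      import random

      def transmittance(n_photons=20_000, mu_a=0.1, mu_s=10.0, depth=1.0):
          """Fraction of photon weight crossing a slab of given depth."""
          mu_t = mu_a + mu_s
          albedo = mu_s / mu_t
          total = 0.0
          for _ in range(n_photons):
              z, uz, w = 0.0, 1.0, 1.0            # position, direction, weight
              while True:
                  z += uz * random.expovariate(mu_t)   # exponential free path
                  if z >= depth:                       # escaped forward: tally
                      total += w
                      break
                  if z < 0.0 or w < 1e-4:              # escaped backward / dead
                      break
                  w *= albedo                          # absorption by weighting
                  uz = 2.0 * random.random() - 1.0     # isotropic rescatter
          return total / n_photons

      print(f"diffuse transmittance ~ {transmittance():.3f}")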

  10. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government-funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  11. Dynamic Discharge Arc Driver. [computerized simulation

    NASA Technical Reports Server (NTRS)

    Dannenberg, R. E.; Slapnicar, P. I.

    1975-01-01

    A computer program using nonlinear RLC circuit analysis was developed to accurately model the electrical discharge performance of the Ames 1-MJ energy storage and arc-driver system. Solutions of circuit parameters are compared with experimental circuit data and related to shock speed measurements. Computer analysis led to the concept of a Dynamic Discharge Arc Driver (DDAD) capable of increasing the range of operation of shock-driven facilities. Utilization of mass addition of the driver gas offers a unique means of improving driver performance. Mass addition acts to increase the arc resistance, which results in better electrical circuit damping with more efficient Joule heating, producing stronger shock waves. Preliminary tests resulted in an increase in shock Mach number from 34 to 39 in air at an initial pressure of 2.5 torr.

  12. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  13. "Speak Out. Act Up. Move Forward." Disobedience-Based Arts Education

    ERIC Educational Resources Information Center

    Kotin, Alison; Aguirre McGregor, Stella; Pellecchia, DeAnna; Schatz, Ingrid; Liu, Shaw Pong

    2013-01-01

    In this essay, Alison Kotin, Stella Aguirre McGregor, DeAnna Pellecchia, Ingrid Schatz, and Shaw Pong Liu reflect on their experiences working with public high school students to create "Speak Out. Act Up. Move Forward.," a performative response to current and historical acts of civil disobedience. The authors--a group of instructors…

  14. Accelerated Application Development: The ORNL Titan Experience

    DOE PAGES

    Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...

    2015-05-09

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  15. Accelerated application development: The ORNL Titan experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Archibald, Rick; Berrill, Mark

    2015-08-01

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  16. 29 CFR 778.313 - Computing overtime pay under the Act for employees compensated on task basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Computing overtime pay under the Act for employees compensated on task basis. 778.313 Section 778.313 Labor Regulations Relating to Labor (Continued) WAGE AND... TO REGULATIONS OVERTIME COMPENSATION Special Problems "Task" Basis of Payment § 778.313 Computing...

  17. Decision making in healthy participants on the Iowa Gambling Task: new insights from an operant approach.

    PubMed

    Bull, Peter N; Tippett, Lynette J; Addis, Donna Rose

    2015-01-01

    The Iowa Gambling Task (IGT) has contributed greatly to the study of affective decision making. However, researchers have observed high inter-study and inter-individual variability in IGT performance in healthy participants, and many are classified as impaired using standard criteria. Additionally, while decision-making deficits are often attributed to atypical sensitivity to reward and/or punishment, the IGT lacks an integrated sensitivity measure. Adopting an operant perspective, two experiments were conducted to explore these issues. In Experiment 1, 50 healthy participants completed a 200-trial version of the IGT which otherwise closely emulated Bechara et al.'s (1999) original computer task. Group data for Trials 1-100 closely replicated Bechara et al.'s original findings of high net scores and preferences for advantageous decks, suggesting that implementations that depart significantly from Bechara's standard IGT contribute to inter-study variability. During Trials 101-200, mean net scores improved significantly and the percentage of participants meeting the "impaired" criterion was halved. An operant-style stability criterion applied to individual data revealed this was likely related to individual differences in learning rate. Experiment 2 used a novel operant card task, the Auckland Card Task (ACT), to derive quantitative estimates of sensitivity using the generalized matching law. Relative to individuals who mastered the IGT, persistent poor performers on the IGT exhibited significantly lower sensitivity to magnitudes (but not frequencies) of rewards and punishers on the ACT. Overall, our findings demonstrate the utility of operant-style analysis of IGT data and the potential of applying operant concurrent-schedule procedures to the study of human decision making.
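
    The generalized matching law used to derive these sensitivity estimates has the form log(B1/B2) = s log(R1/R2) + log b, where s is sensitivity and b is bias; fitting a line to log response ratios against log reinforcer ratios yields s as the slope. The sketch below fits made-up choice counts, not the study's data.

      # Generalized matching law fit: slope = sensitivity s, intercept = log bias.

      import numpy as np

      B1 = np.array([70, 55, 40, 25, 15])    # responses to alternative 1
      B2 = np.array([30, 45, 60, 75, 85])    # responses to alternative 2
      R1 = np.array([80, 60, 40, 20, 10])    # reinforcers from alternative 1
      R2 = np.array([20, 40, 60, 80, 90])    # reinforcers from alternative 2

      x = np.log10(R1 / R2)
      y = np.log10(B1 / B2)
      s, log_b = np.polyfit(x, y, 1)
      print(f"sensitivity s = {s:.2f}, bias b = {10**log_b:.2f}")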

  19. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  20. The cognitive roles of behavioral variability: idiosyncratic acts as the foundation of identity and as transitional, preparatory, and confirmatory phases.

    PubMed

    Eilam, David

    2015-02-01

    Behavior in obsessive compulsive disorder (OCD), in habitual daily tasks, and in sport and cultural rituals is deconstructed into elemental acts and categorized into common acts, performed by all individuals completing a similar task, and idiosyncratic acts, not performed by all individuals. Never skipped, common acts establish the pragmatic part of motor tasks. Repetitive performance of a few common acts renders rituals a rigid form, whereby common acts may serve as memes for cultural transmission. While idiosyncratic acts are not pragmatically necessary for task completion, they fulfill important cognitive roles. They form a long preparatory phase in tasks that involve high stakes, and a long confirmatory phase in OCD rituals. Idiosyncratic acts also form transitional phases between motor tasks, and are involved in establishing identity and preserving the flexibility necessary for adapting to varying circumstances. Behavioral variability, as manifested in idiosyncrasy, thus does not seem to be a noise or by-product of motor activity, but an essential cognitive component that has been preserved in the evolution of behavioral patterns, similar to the genetic variability in biology.

  1. 78 FR 40541 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA)-Match Number 1014

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0019] Privacy Act of 1974, as Amended; Computer Matching Program (SSA)--Match Number 1014 AGENCY: Social Security Administration (SSA).

  2. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  3. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  4. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report describes the design of architectures and performance measurements for a parallel computing environment that runs Monte Carlo simulations for particle therapy planning, using high performance computing (HPC) instances within a public cloud-computing infrastructure. Performance measurements showed approximately 28 times faster execution than a single-thread architecture, combined with improved stability. A study of methods of optimizing system operations also indicated lower cost.
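
    As a quick plausibility check on such a result, the reported speedup can be turned into a parallel efficiency once a core count is assumed (the report snippet does not state one, so the counts below are hypothetical).

      # Parallel efficiency implied by a ~28x speedup at assumed core counts.

      speedup = 28.0
      for cores in (32, 36, 64):
          print(f"{cores} cores -> efficiency {speedup / cores:.0%}")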

  5. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  6. Risk screening for ADHD in a college population: is there a relationship with academic performance?

    PubMed

    Burlison, Jonathan D; Dwyer, William O

    2013-01-01

    The present study examines the relationship between self-reported levels of ADHD and academic outcomes, as well as aptitude. A total of 523 college students took the Adult Self-Report Scale-Version 1.1 (ASRS-V1.1), and their scores were compared with course performance and ACT (American College Test) composite scores. The measure identified 70 students (13.4%) as being in the "highly likely" category for an ADHD diagnosis. Course exam and ACT scores for the 70 "highly likely" students were statistically identical to the remaining 453 students in the sample and the 77 students identified as "highly unlikely" as well. Only 4 of the "highly likely" 70 students were registered with the university's Office of Student Disability Services as having been diagnosed with ADHD. The ASRS-V1.1 failed to discriminate academic performance and aptitude differences between ADHD "highly likely" and "highly unlikely" individuals. The use of self-report screeners of ADHD is questioned in contexts relating ADHD to academic performance.

  7. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  8. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE PAGES

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung; ...

    2016-11-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  9. Medical applications for high-performance computers in SKIF-GRID network.

    PubMed

    Zhuchkov, Alexey; Tverdokhlebov, Nikolay

    2009-01-01

    The paper presents a set of software services for massive mammography image processing using high-performance parallel computers of the SKIF family, linked into a service-oriented grid network. Experience with a prototype implementation in two medical institutions is also described.

  10. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.
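
    As a toy illustration of the classical simulation layer such a hybrid software stack might expose, here is a minimal state-vector sketch (entirely our own construction, not the authors' software):

      # Minimal state-vector simulation of a two-qubit Bell-pair circuit,
      # the kind of classical kernel a hybrid quantum/HPC stack might run.
      # Hypothetical illustration only.
      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
      I2 = np.eye(2)
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]])

      state = np.zeros(4)
      state[0] = 1.0                                 # start in |00>
      state = np.kron(H, I2) @ state                 # H on the first qubit
      state = CNOT @ state                           # entangle into a Bell pair
      print(np.round(state ** 2, 3))                 # [0.5, 0, 0, 0.5]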

  11. HEALER: homomorphic computation of ExAct Logistic rEgRession for secure rare disease variants analysis in GWAS

    PubMed Central

    Wang, Shuang; Zhang, Yuchen; Dai, Wenrui; Lauter, Kristin; Kim, Miran; Tang, Yuzhe; Xiong, Hongkai; Jiang, Xiaoqian

    2016-01-01

    Motivation: Genome-wide association studies (GWAS) have been widely used in discovering the association between genotypes and phenotypes. Human genome data contain valuable but highly sensitive information. Unprotected disclosure of such information might put individuals' privacy at risk. It is important to protect human genome data. Exact logistic regression is a bias-reduction method based on a penalized likelihood to discover rare variants that are associated with disease susceptibility. We propose the HEALER framework to facilitate secure rare variants analysis with a small sample size. Results: We target algorithm design aimed at reducing the computational and storage costs of learning a homomorphic exact logistic regression model (i.e. evaluating P-values of coefficients), where the circuit depth is proportional to the logarithm of the data size. We evaluate the algorithm performance using rare Kawasaki Disease datasets. Availability and implementation: Download HEALER at http://research.ucsd-dbmi.org/HEALER/ Contact: shw070@ucsd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26446135

  12. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  13. Modeling the Effects of Two Behavior Moderators in ACT-R

    DTIC Science & Technology

    2002-01-10

    11(2), 93-100. Kelsey, R. M., Blascovich, J., Leitten, C. L., Schneider, T. R., Tomaka, J., & Wiens, S. (2000). Cardiovascular reactivity and... Computer Studies, 55, 85-107. Tomaka, J., Blascovich, J., Kelsey, R. M., & Leitten, C. L. (1993). Subjective, physiological, and behavioral effects of... performance and more solution attempts than neutral situations (Kelsey, Blascovich, Leitten, Schneider, Tomaka, & Wiens, 2000; Quigley, Barret, & Weinstein

  14. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on-line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  15. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo.

    PubMed

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-11-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed.

  16. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo

    PubMed Central

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-01-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed. PMID:23336016

  17. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  18. High performance computing and communications: Advancing the frontiers of information technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  19. Expected orbit determination performance for the TOPEX/Poseidon mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerem, R.S.; Putney, B.H.; Marshall, J.A.

    1993-03-01

    The TOPEX/Poseidon (T/P) mission, launched during the summer of 1992, has the requirement that the radial component of its orbit must be computed to an accuracy of 13 cm root-mean-square (rms) or better, allowing measurements of the sea surface height to be computed to similar accuracy when the satellite height is differenced with the altimeter measurements. This will be done by combining precise satellite tracking measurements with precise models of the forces acting on the satellite. The Space Geodesy Branch at Goddard Space Flight Center (GSFC), as part of the T/P precision orbit determination (POD) Team, has the responsibility within NASA for the T/P precise orbit computations. The prelaunch activities of the T/P POD Team have been mainly directed towards developing improved models of the static and time-varying gravitational forces acting on T/P and precise models for the non-conservative forces perturbing the orbit of T/P such as atmospheric drag, solar and Earth radiation pressure, and thermal imbalances. The radial orbit error budget for T/P allows 10 cm rms error due to gravity field mismodeling, 3 cm due to solid Earth and ocean tides, 6 cm due to radiative forces, and 3 cm due to atmospheric drag. A prelaunch assessment of the current modeling accuracies for these forces indicates that the radial orbit error requirements can be achieved with the current models, and can probably be surpassed once T/P tracking data are used to fine tune the models. Provided that the performance of the T/P spacecraft is nominal, the precise orbits computed by the T/P POD Team should be accurate to 13 cm or better radially.
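
    Assuming the quoted error sources are independent, a quick root-sum-square check (our back-of-the-envelope arithmetic, not from the source) shows why the 13 cm requirement is attainable:

      # Root-sum-square of the quoted radial error budget, in cm.
      # Independence of the error sources is assumed for this check.
      import math

      terms = {"gravity field": 10.0, "tides": 3.0, "radiative forces": 6.0, "drag": 3.0}
      rss = math.sqrt(sum(v ** 2 for v in terms.values()))
      print(f"RSS radial error ~ {rss:.1f} cm")  # ~12.4 cm, within the 13 cm budget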

  20. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  1. 32 CFR 505.8 - Training requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... managers, computer systems development personnel, computer systems operations personnel, statisticians... Act requires all heads of Army Staff agencies, field operating agencies, direct reporting units, Major... in the design, development, operation, and maintenance of any Privacy Act system of records and to...

  2. 32 CFR 505.8 - Training requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... managers, computer systems development personnel, computer systems operations personnel, statisticians... Act requires all heads of Army Staff agencies, field operating agencies, direct reporting units, Major... in the design, development, operation, and maintenance of any Privacy Act system of records and to...

  3. 32 CFR 505.8 - Training requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... managers, computer systems development personnel, computer systems operations personnel, statisticians... Act requires all heads of Army Staff agencies, field operating agencies, direct reporting units, Major... in the design, development, operation, and maintenance of any Privacy Act system of records and to...

  4. 32 CFR 505.8 - Training requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... managers, computer systems development personnel, computer systems operations personnel, statisticians... Act requires all heads of Army Staff agencies, field operating agencies, direct reporting units, Major... in the design, development, operation, and maintenance of any Privacy Act system of records and to...

  5. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing.

    PubMed

    Midekisa, Alemayehu; Holl, Felix; Savory, David J; Andrade-Pacheco, Ricardo; Gething, Peter W; Bennett, Adam; Sturrock, Hugh J W

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth's land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources.

  6. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing

    PubMed Central

    Holl, Felix; Savory, David J.; Andrade-Pacheco, Ricardo; Gething, Peter W.; Bennett, Adam; Sturrock, Hugh J. W.

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth’s land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources. PMID:28953943
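
    A minimal sketch of the kind of Earth Engine workflow the paper describes (the collection ID and date range are assumptions, not the authors' actual pipeline):

      # Hypothetical Earth Engine sketch: build a per-pixel median Landsat
      # composite over a multi-year window; the reduction runs in Google's
      # cloud, not locally. Requires authenticated Earth Engine credentials.
      import ee

      ee.Initialize()
      composite = (ee.ImageCollection('LANDSAT/LT05/C02/T1_L2')  # assumed collection
                   .filterDate('2000-01-01', '2015-12-31')       # assumed window
                   .median())
      print(composite.bandNames().getInfo())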

  7. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.

    PubMed

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-05-08

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.

  8. The Regulation of Medical Computer Software as a “Device” under the Food, Drug, and Cosmetic Act

    PubMed Central

    Brannigan, Vincent

    1986-01-01

    Recent developments in computer software have raised the possibility that federal regulators may claim to control medical computer software as a “device” under the Food, Drug and Cosmetic Act. The purpose of this paper is to analyze the FDCA to determine whether computer software is included in the statutory scheme, examine constitutional arguments relating to computer software, and discuss regulatory principles that should be taken into account when deciding appropriate regulation. This paper is limited to computer program output used by humans in deciding appropriate medical therapy for a patient.

  9. Predicting Academic Success in First-Year Mathematics Courses Using ACT Mathematics Scores and High School Grade Point Average

    ERIC Educational Resources Information Center

    Mayo, Sandra Sims

    2012-01-01

    Improving college performance and retention is a daunting task for colleges and universities. Many institutions are taking action to increase retention rates by exploring their academic programs. Regression analysis was used to compare the effectiveness of ACT mathematics scores, high school grade point averages (HSGPA), and demographic factors…

  10. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    ERIC Educational Resources Information Center

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  11. Appropriate Use Policy | High-Performance Computing | NREL

    Science.gov Websites

    Website snippet describing the appropriate use policy for users of National Renewable Energy Laboratory (NREL) High Performance Computing (HPC) resources, including intellectual property terms applying to government agency, National Laboratory, University, or private entity users, and multifactor authentication via a physical or virtual token used with a one-time password.

  12. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  13. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  14. Web Based Parallel Programming Workshop for Undergraduate Education.

    ERIC Educational Resources Information Center

    Marcus, Robert L.; Robertson, Douglass

    Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…

  15. Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    NASA Astrophysics Data System (ADS)

    Giancardo, L.; Sánchez-Ferro, A.; Butterworth, I.; Mendoza, C. S.; Hooker, J. M.

    2015-04-01

    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button triggers well-defined brain areas which are known to be affected by motor-compromised conditions. In this study, we demonstrate that the daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced a psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, which is detected by our classifier with an Area Under the ROC Curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression to psychomotor impairment (p < 0.001) regardless of the content and language of the text typed, and perform consistently with different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders or intoxication at a negligible cost in the general population.
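
    A minimal sketch of how key-hold-time features might be derived and scored (the features, data, and classifier here are hypothetical; the study's actual method is not reproduced):

      # Hypothetical sketch: derive simple key-hold-time features from
      # (press, release) timestamps and score a classifier with ROC AUC.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def hold_time_features(presses, releases):
          holds = np.asarray(releases) - np.asarray(presses)  # key-hold durations
          return [holds.mean(), holds.std(), np.percentile(holds, 90)]

      rng = np.random.default_rng(1)
      X, y = [], []
      for label in (0, 1):  # 1 = simulated impairment: longer, noisier holds
          for _ in range(50):
              p = np.cumsum(rng.uniform(0.1, 0.4, 100))
              r = p + rng.normal(0.10 + 0.03 * label, 0.02 + 0.01 * label, 100).clip(0.01)
              X.append(hold_time_features(p, r))
              y.append(label)

      clf = LogisticRegression().fit(X, y)
      print("in-sample AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))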

  16. Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing

    ERIC Educational Resources Information Center

    Loh, Matthew

    2013-01-01

    The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of wireless link to a neural…

  17. High Skin Temperature and Hypohydration Impair Aerobic Performance

    DTIC Science & Technology

    2012-01-01

    hypohydration) in impairing submaximal aerobic performance. Hot skin is associated with high skin blood flow requirements and hypohydration is... the aerobic performance impairment (-1.5% for each 1°C skin temperature). We conclude that hot skin (high skin blood flow requirements from narrow... associated with high skin blood flow requirements and hypohydration is associated with reduced cardiac filling, both of which act to reduce aerobic

  18. Outside influence: The sense of agency takes into account what is in our surroundings.

    PubMed

    Hon, Nicholas; Seow, Yin-Yi; Pereira, Don

    2018-05-01

    We are quite capable of distinguishing those outcomes we cause from those we do not. This ability to sense self-agency is thought to be produced by a comparison between a predictive representation of an outcome and the actual outcome that occurs. It is unclear, though, specifically what types of information can be entered into agency computations. Here, we demonstrate that information from non-target stimuli (stimuli that are not directly acted upon) incidentally present in our surroundings can influence predictions of outcomes, consequently modulating the sense of agency over clearly-defined target outcomes (those that occur to acted-upon stimuli). This provides the first evidence that our sense of agency is contextualized with respect to what is in our immediate visual environment. Furthermore, our data suggest that agency computations, instead of just a single comparison, may involve comparisons performed in stages, with different stages involving different types/classes of information. A model of such multi-stage comparisons is described.

  19. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric-based wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and the Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
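
    The all-to-all arises because a distributed 2-D FFT is typically decomposed into row FFTs, a transpose (the all-to-all exchange), and a second set of row FFTs; the single-process sketch below verifies that decomposition (our illustration, not the flight code):

      # Row-FFT / transpose / row-FFT decomposition behind a distributed
      # 2-D FFT. On a cluster the transpose is the all-to-all step; here
      # it is verified in one process against numpy's reference fft2.
      import numpy as np

      x = np.random.default_rng(2).standard_normal((64, 64))
      rows = np.fft.fft(x, axis=1)          # FFT each row (node-local work)
      full = np.fft.fft(rows.T, axis=1).T   # transpose (all-to-all), FFT again
      assert np.allclose(full, np.fft.fft2(x))
      print("row-transpose-row decomposition matches fft2")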

  20. Computer Pornography and Child Exploitation Prevention Act. Hearing before the Subcommittee on Juvenile Justice of the Committee on the Judiciary. United States Senate, Ninety-Ninth Congress, First Session on S. 1305 (October 1, 1985).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on the Judiciary.

    The Computer Pornography and Child Exploitation Prevention Act would establish criminal penalties for the transmission by computer of obscene matter, or by computer or other means, of matter pertaining to the sexual exploitation of children. Opening statements by Senators Trible, Denton, Specter, and McConnell are presented. The text of the bill…

  1. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  2. High Performance Computing and Communications Program. Hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology. U.S. House of Representatives, One Hundred Third Congress, First Session (October 26, 1993).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    This hearing explores how the High Performance Computing and Communications Program (HPCC) relates to the technology needs of industry. Testimony and prepared statements from the following witnesses on future effects of computing and networking technologies on their companies are included: (1) F. Brett Berlin, president, Brett Berlin Associates,…

  3. Earth Science Technology Office's Computational Technologies Project

    NASA Technical Reports Server (NTRS)

    Fischer, James (Technical Monitor); Merkey, Phillip

    2005-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project and to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community, so that we can predict the applicability of these technologies to the scientific community represented by the CT project and formulate long-term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements, and the capabilities of high-performance computers to satisfy this anticipated need.

  4. Co-percolation to tune conductive behaviour in dynamical metallic nanowire networks.

    PubMed

    Fairfield, J A; Rocha, C G; O'Callaghan, C; Ferreira, M S; Boland, J J

    2016-11-03

    Nanowire networks act as self-healing smart materials, whose sheet resistance can be tuned via an externally applied voltage stimulus. This memristive response occurs due to modification of junction resistances to form a connectivity path across the lowest-barrier junctions in the network. While most network studies have been performed on expensive noble metal nanowires like silver, networks of inexpensive nickel nanowires with a nickel oxide coating can also demonstrate resistive switching, a common feature of metal oxides with filamentary conduction. However, networks made solely from nickel nanowires have high operation voltages which prohibit large-scale material applications. Here we show, using both experiment and simulation, that a heterogeneous network of nickel and silver nanowires allows optimization of the activation voltage, as well as tuning of the conduction behaviour to be either resistive switching, memristive, or a combination of both. Small percentages of silver nanowires, below the percolation threshold, induce these changes in electrical behaviour, even for low area coverage and hence very transparent films. Silver nanowires act as current concentrators, amplifying conductivity locally, as shown in our computational dynamical activation framework for networks of junctions. These results demonstrate that a heterogeneous nanowire network can act as a cost-effective adaptive material with minimal use of noble metal nanowires, without losing the memristive behaviour that is essential for smart sensing and neuromorphic applications.
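
    As a much-simplified illustration of the percolation behaviour underlying such networks, here is a site-percolation sketch on a square lattice (a toy model, not the paper's junction-level simulation):

      # Toy percolation sketch: estimate how often a randomly occupied
      # square lattice connects top to bottom, using union-find. The 2-D
      # site-percolation threshold is near p = 0.593.
      import numpy as np

      def percolates(occupied):
          n = occupied.shape[0]
          parent = list(range(n * n + 2))      # plus virtual top/bottom nodes
          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]
                  a = parent[a]
              return a
          def union(a, b):
              parent[find(a)] = find(b)
          top, bottom = n * n, n * n + 1
          for i in range(n):
              for j in range(n):
                  if not occupied[i, j]:
                      continue
                  idx = i * n + j
                  if i == 0: union(idx, top)
                  if i == n - 1: union(idx, bottom)
                  if i > 0 and occupied[i - 1, j]: union(idx, idx - n)
                  if j > 0 and occupied[i, j - 1]: union(idx, idx - 1)
          return find(top) == find(bottom)

      rng = np.random.default_rng(3)
      for p in (0.50, 0.59, 0.70):
          hits = sum(percolates(rng.random((50, 50)) < p) for _ in range(20))
          print(f"p={p:.2f}: percolated in {hits}/20 trials")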

  5. Design and Analysis Tools for Concurrent Blackboard Systems

    NASA Technical Reports Server (NTRS)

    McManus, John W.

    1991-01-01

    A blackboard system consists of a set of knowledge sources, a blackboard data structure, and a control strategy used to activate the knowledge sources. The blackboard model of problem solving is best described by Dr. H. Penny Nii of the Stanford University AI Laboratory: "A Blackboard System can be viewed as a collection of intelligent agents who are gathered around a blackboard, looking at pieces of information written on it, thinking about the current state of the solution, and writing their conclusions on the blackboard as they generate them. " The blackboard is a centralized global data structure, often partitioned in a hierarchical manner, used to represent the problem domain. The blackboard is also used to allow inter-knowledge source communication and acts as a shared memory visible to all of the knowledge sources. A knowledge source is a highly specialized, highly independent process that takes inputs from the blackboard data structure, performs a computation, and places the results of the computation in the blackboard data structure. This design allows for an opportunistic control strategy. The opportunistic problem-solving technique allows a knowledge source to contribute towards the solution of the current problem without knowing which of the other knowledge sources will use the information. The use of opportunistic problem-solving allows the data transfers on the blackboard to determine which processes are active at a given time. Designing and developing blackboard systems is a difficult process. The designer is trying to balance several conflicting goals and achieve a high degree of concurrent knowledge source execution while maintaining both knowledge and semantic consistency on the blackboard. Blackboard systems have not attained their apparent potential because there are no established tools or methods to guide in their construction or analyze their performance.
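
    A minimal sketch of the blackboard pattern as described above (our illustration, not Dr. Nii's or NASA's code): knowledge sources read the shared blackboard and opportunistically post new conclusions.

      # Minimal blackboard pattern: independent knowledge sources act
      # whenever the blackboard holds the inputs they need, regardless
      # of the order in which they are registered.
      blackboard = {"raw": [3, 1, 2]}

      def ks_sort(bb):                 # knowledge source: sorts raw data
          if "raw" in bb and "sorted" not in bb:
              bb["sorted"] = sorted(bb["raw"])
              return True
          return False

      def ks_median(bb):               # knowledge source: summarizes sorted data
          if "sorted" in bb and "median" not in bb:
              s = bb["sorted"]
              bb["median"] = s[len(s) // 2]
              return True
          return False

      knowledge_sources = [ks_median, ks_sort]   # registration order is irrelevant
      while any(ks(blackboard) for ks in knowledge_sources):  # opportunistic control
          pass
      print(blackboard)   # {'raw': [3, 1, 2], 'sorted': [1, 2, 3], 'median': 2}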

  6. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high-performance computing platforms.

  7. Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and by combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and their results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real-world full vehicle multidisciplinary optimization problem considering all performance attributes and a large number of design variables becomes feasible.
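
    A minimal sketch of the DOE/RSM idea: sample a stand-in "simulation" at design points, fit a quadratic response surface, and optimize the cheap surrogate (hypothetical; not the paper's crash models):

      # DOE/RSM sketch: least-squares quadratic response surface over a
      # 2-D design space. expensive_sim is a stand-in, not a crash code.
      import numpy as np

      def expensive_sim(x1, x2):
          return (x1 - 0.3) ** 2 + 2 * (x2 - 0.7) ** 2 + 0.1 * x1 * x2

      rng = np.random.default_rng(4)
      X = rng.uniform(0, 1, (30, 2))               # DOE: 30 random design points
      y = expensive_sim(X[:, 0], X[:, 1])

      # Quadratic basis: [1, x1, x2, x1^2, x2^2, x1*x2]
      A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                           X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      g1, g2 = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
      surf = (coef[0] + coef[1] * g1 + coef[2] * g2
              + coef[3] * g1 ** 2 + coef[4] * g2 ** 2 + coef[5] * g1 * g2)
      i, j = np.unravel_index(surf.argmin(), surf.shape)
      print(f"surrogate optimum near x1={g1[i, j]:.2f}, x2={g2[i, j]:.2f}")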

  8. Onward to Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and also in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10(exp 12) floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratory in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.

  9. Exascale computing and big data

    DOE PAGES

    Reed, Daniel A.; Dongarra, Jack

    2015-06-25

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  10. Exascale computing and big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, Daniel A.; Dongarra, Jack

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  11. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  12. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  13. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  14. The NAS Parallel Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  15. Tse computers. [Chinese pictograph character binary image processor design for high speed applications

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1973-01-01

    Tse computers have the potential of operating four or five orders of magnitude faster than present digital computers. The computers of the new design use binary images as their basic computational entity. The word 'tse' is the transliteration of the Chinese word for 'pictograph character.' Tse computers are large collections of devices that perform logical operations on binary images. The operations on binary images are to be performed over the entire image simultaneously.
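
    A minimal sketch of the tse idea, with numpy arrays standing in for the hardware: a single "instruction" applies a logical operation to every pixel of a binary image at once.

      # Whole binary images as the basic computational entity: one logical
      # "instruction" operates on all pixels simultaneously.
      import numpy as np

      a = np.array([[1, 0], [1, 1]], dtype=bool)   # binary image operand A
      b = np.array([[0, 1], [1, 0]], dtype=bool)   # binary image operand B

      print((a & b).astype(int))   # image-wide AND
      print((a | b).astype(int))   # image-wide OR
      print((~a).astype(int))      # image-wide NOT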

  16. The Computer Literacy Act, H.R. 3750 and The National Educational Software Act, H.R. 4628. Hearing before the Subcommittee on Science, Research and Technology of the Committee on Science and Technology, House of Representatives, Ninety-Eighth Congress, Second Session, June 5, 1984. No. 107.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science and Technology.

    This legislative report offers testimony and related materials concerning two bills that address the issues of the computer in the classroom as an educational tool, access to computers, teacher training, and software development through the establishment of a National Computer Educational Software Corporation. Testimony of the following witnesses…

  17. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    NASA Technical Reports Server (NTRS)

    Charlton, Eric F.

    1998-01-01

    Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultation with both the LMTAS Advanced Design and Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.

  18. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs, and MICs, to accelerate geocomputation in different applications.

  19. Mesh Network Architecture for Enabling Inter-Spacecraft Communication

    NASA Technical Reports Server (NTRS)

    Becker, Christopher; Merrill, Garrick

    2017-01-01

    To enable communication between spacecraft operating in a formation or small constellation, a mesh network architecture was developed and tested using a time division multiple access (TDMA) communication scheme. The network is designed to allow for the exchange of telemetry and other data between spacecraft to enable collaboration between small spacecraft. The system uses a peer-to-peer topology with no central router, so that it has no single point of failure. The mesh network is dynamically configurable to allow for the addition and removal of spacecraft in the communication network. Flight testing was performed using an unmanned aerial system (UAS) formation acting as a spacecraft analogue and providing a stressing environment to prove mesh network performance. The mesh network was primarily devised to provide low-latency, high-frequency communication but is flexible and can also be configured to provide higher bandwidth for applications desiring high data throughput. The network includes a relay functionality that extends the maximum range between spacecraft in the network by relaying data from node to node. The mesh network control is implemented completely in software, making it hardware agnostic and thereby allowing it to function with a wide variety of existing radios and computing platforms.
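
    As a hedged illustration of the two ideas named above, TDMA slot ownership plus hop-by-hop relay, the Python sketch below gives each node one transmit slot per frame and re-queues packets for out-of-range destinations at neighboring nodes. It is a conceptual toy, not the flight software; duplicate suppression and dynamic slot assignment are omitted.

      from dataclasses import dataclass, field

      @dataclass
      class Node:
          node_id: int
          neighbors: set                     # node ids currently in radio range
          outbox: list = field(default_factory=list)
          inbox: list = field(default_factory=list)

      def run_frame(nodes):
          """One TDMA frame: node k transmits only in slot k (slot order = id)."""
          for slot_id in sorted(nodes):
              tx = nodes[slot_id]
              pending, tx.outbox = tx.outbox, []
              for pkt in pending:
                  if pkt["dst"] in tx.neighbors:
                      nodes[pkt["dst"]].inbox.append(pkt)   # direct delivery
                  else:
                      for nid in tx.neighbors:              # relay one hop onward
                          nodes[nid].outbox.append(pkt)

      # Three nodes in a line, 0 <-> 1 <-> 2: node 0 reaches node 2 via relay.
      nodes = {0: Node(0, {1}), 1: Node(1, {0, 2}), 2: Node(2, {1})}
      nodes[0].outbox.append({"src": 0, "dst": 2, "data": "telemetry"})
      for _ in range(2):   # two frames suffice even if the relay's slot has passed
          run_frame(nodes)
      print(nodes[2].inbox)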

  20. Safety focused modeling of lithium-ion batteries: A review

    NASA Astrophysics Data System (ADS)

    Abada, S.; Marlair, G.; Lecocq, A.; Petit, M.; Sauvant-Moynot, V.; Huet, F.

    2016-02-01

    Safety issues pertaining to Li-ion batteries justify intensive testing all along their value chain. However, progress in scientific knowledge regarding lithium-based battery failure modes, as well as remarkable technological breakthroughs in computing science, now allows for the development and use of prediction tools to assist designers in developing safer batteries. Accordingly, this paper offers a review of significant modeling works performed in the area, with a focus on the characterization of the thermal runaway hazard and its related triggering events. Progress made in models aiming to integrate battery ageing effects and the related physics is also discussed, as well as the strong interaction with modeling-focused use of testing and the main achievements obtained toward marketing safer systems. Current limitations and the new challenges and opportunities that are expected to shape future modeling activity are also put in perspective. According to market trends, it is anticipated that safety may still act as a restraint in the search for an acceptable compromise with the overall performance and cost of lithium-ion based and post-lithium-ion rechargeable batteries of the future. In that context, high-throughput prediction tools capable of screening new component properties for both functional and safety-related aspects are highly desirable.

  1. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management.

    PubMed

    Silva, Bhagya Nathali; Khan, Murad; Han, Kijun

    2018-02-25

    The emergence of smart devices and smart appliances has greatly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements, and energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and by the adopted energy management approach. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching-off system with load balancing and an appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is kept below a defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme was evaluated against an existing energy management scheme through computer simulation. The simulation results reveal a significant improvement from the proposed LST-based energy management scheme in terms of the cost of energy, along with reduced domestic energy consumption facilitated by the automated switching-off mechanism.
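
    To make the scheduling idea concrete, the sketch below implements a minimal least-slack-time step under a household power cap: at each time step the appliances with the least slack (deadline minus remaining service time) run first, and anything that would push the load over the cap is deferred. The appliance data and the 3 kW cap are invented for illustration and are not taken from the paper.

      def slack(app, t):
          """Slack = time to deadline minus remaining service time."""
          return app["deadline"] - t - app["remaining"]

      def schedule_step(apps, t, max_load_w=3000):
          """Run the most urgent appliances that fit under the power cap."""
          running, load = [], 0
          for app in sorted((a for a in apps if a["remaining"] > 0),
                            key=lambda a: slack(a, t)):
              if load + app["power_w"] <= max_load_w:
                  running.append(app["name"])
                  load += app["power_w"]
                  app["remaining"] -= 1       # one time step of service delivered
          return running, load

      apps = [
          {"name": "washer",     "power_w": 1800, "remaining": 2, "deadline": 6},
          {"name": "dishwasher", "power_w": 1200, "remaining": 2, "deadline": 4},
          {"name": "dryer",      "power_w": 2500, "remaining": 1, "deadline": 8},
      ]
      for t in range(6):
          print(t, *schedule_step(apps, t))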

  2. User Account Passwords | High-Performance Computing | NREL

    Science.gov Websites

    For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. After you request an HPC user account, you'll receive a temporary password.

  3. Numerical Prediction of Pitch Damping Stability Derivatives for Finned Projectiles

    DTIC Science & Technology

    2013-11-01

    Supported in part by a grant of high-performance computing time from the U.S. DOD High Performance Computing Modernization Program (HPCMP) at the Army...

  4. Data Security Policy | High-Performance Computing | NREL

    Science.gov Websites

    Users must agree to this policy to use NREL's high-performance computing (HPC) systems. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems are categorized as low risk, and data are classed as either sensitive or non-sensitive. One example of sensitive data would be personally identifiable information (PII).

  5. Federal High Performance Computing and Communications Program. The Department of Energy Component.

    ERIC Educational Resources Information Center

    Department of Energy, Washington, DC. Office of Energy Research.

    This report, profusely illustrated with color photographs and other graphics, elaborates on the Department of Energy (DOE) research program in High Performance Computing and Communications (HPCC). The DOE is one of seven agency programs within the Federal Research and Development Program working on HPCC. The DOE HPCC program emphasizes research in…

  6. Training | High-Performance Computing | NREL

    Science.gov Websites

    Find training resources for using NREL's high-performance computing (HPC) systems, as well as related online tutorials. Upcoming training includes an HPC User Workshop on June 12th. At the conference, a group meets to discuss best practices in HPC training; this group has developed a list of resources.

  7. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource.

  8. Semi-interpenetrating polymer network for tougher and more microcracking resistant high temperature polymers

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor)

    1992-01-01

    This invention is a semi-interpenetrating polymer network which includes a high-performance thermosetting polyimide having a nadic end group acting as a crosslinking site, and a high-performance linear thermoplastic polyimide. An improved high-temperature matrix resin is provided which is capable of performing at 316 C in air for several hundred hours. This resin has significantly improved toughness and microcracking resistance, excellent processability and mechanical performance, and cost effectiveness.

  9. Computational fluid dynamics study of the variable-pitch split-blade fan concept

    NASA Technical Reports Server (NTRS)

    Kepler, C. E.; Elmquist, A. R.; Davis, R. L.

    1992-01-01

    A computational fluid dynamics study was conducted to evaluate the feasibility of the variable-pitch split-blade supersonic fan concept. This fan configuration was conceived as a means to enable a supersonic fan to switch from the supersonic through-flow type of operation at high speeds to a conventional fan with subsonic inflow and outflow at low speeds. During this off-design, low-speed mode of operation, the fan would operate with a substantial static pressure rise across the blade row, like a conventional transonic fan; the front (variable-pitch) blade would be aligned with the incoming flow, and the aft blade would remain fixed in the position set by the supersonic design conditions. Because of these geometrical features, this low-speed configuration would inherently have a large amount of turning and, thereby, the potential for a large total pressure increase in a single stage. Such a high-turning blade configuration is prone to flow separation; it was hoped that the channeling of the flow between the blades would act like a slotted wing and help alleviate this problem. A total of 20 blade configurations representing various supersonic and transonic configurations were evaluated using a Navier-Stokes CFD program called ADAPTNS, chosen for its adaptive grid features. The flow fields generated by this computational procedure were processed by a data reduction program which calculated average flow properties and simulated fan performance. These results were employed to make quantitative comparisons and evaluations of blade performance. The supersonic split-blade configurations generated performance comparable to a single-blade supersonic through-flow fan configuration: simulated rotor total pressure ratios of 2.5 or better were achieved for Mach 2.0 inflow conditions, with corresponding fan efficiencies of approximately 75 percent or better. The transonic split-blade configurations, with their large amounts of turning, achieved simulated total pressure ratios of 3.0 or better with subsonic inflow conditions, but they exhibited large losses, fan efficiencies only in the 70-percent range, large separated regions, and low-velocity wakes. Additional turning and diffusion of this flow in a subsequent stator row would probably be very inefficient; the high total pressure ratios indicated by the rotor performance would be substantially reduced by the stators, and the stage efficiency would be substantially lower. Such performance makes this dual-mode fan concept less attractive than originally postulated.

  10. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turnaround times that are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that must be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
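
    A minimal sketch of the block-volume idea follows, assuming a pure per-voxel filter and fixed 64^3 blocks dispatched to a process pool; the platform's size-adaptive blocks, scheduler, and library layering are of course far more sophisticated.

      from concurrent.futures import ProcessPoolExecutor
      from itertools import product

      import numpy as np

      def blocks(shape, b=64):
          """Yield slice tuples tiling a 3D volume into b^3 blocks."""
          for i, j, k in product(*(range(0, s, b) for s in shape)):
              yield (slice(i, i + b), slice(j, j + b), slice(k, k + b))

      def denoise(block):
          return np.clip(block, 0.0, 1.0)   # toy stand-in for a real filter

      if __name__ == "__main__":
          vol = np.random.rand(256, 256, 256).astype(np.float32)
          out = np.empty_like(vol)
          slabs = list(blocks(vol.shape))
          with ProcessPoolExecutor() as pool:   # load distribution across cores
              for sl, res in zip(slabs, pool.map(denoise, (vol[sl] for sl in slabs))):
                  out[sl] = res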

  11. Validity Evidence for ACT Compass® Placement Tests. ACT Research Report Series 2014 (2)

    ERIC Educational Resources Information Center

    Westrick, Paul A.; Allen, Jeff

    2014-01-01

    We examined the validity of using Compass® test scores and high school grade point average (GPA) for placing students in first-year college courses and for identifying students at risk of not succeeding. Consistent with other research, the combination of high school GPA and Compass scores performed better than either measure used alone. Results…

  12. Navier-Stokes predictions of pitch damping for axisymmetric shell using steady coning motion

    NASA Technical Reports Server (NTRS)

    Weinacht, Paul; Sturek, Walter B.; Schiff, Lewis B.

    1991-01-01

    Previous theoretical investigations have proposed that the side force and moment acting on a body of revolution in steady coning motion could be related to the pitch-damping force and moment. In the current research effort, this approach is applied to produce predictions of the pitch damping for axisymmetric shell. The flow fields about these projectiles undergoing steady coning motion are successfully computed using a parabolized Navier-Stokes computational approach which makes use of a rotating coordinate frame. The governing equations are modified to include the centrifugal and Coriolis force terms due to the rotating coordinate frame. From the computed flow field, the side moments due to coning motion, spinning motion, and combined spinning and coning motion are used to determine the pitch-damping coefficients. Computations are performed for two generic shell configurations, a secant-ogive-cylinder and a secant-ogive-cylinder-boattail.

  13. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Plutonium Metals, Oxides, and Solutions on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; Gough, Sean T.

    This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.

  14. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High-performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite-difference time-domain (FDTD) method is adopted for the numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems with complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and we numerically investigate its performance on the cluster. To reduce the computation time, the communication between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
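
    The communication-hiding trick generalizes beyond MTDM: post non-blocking halo exchanges, do the communication-independent work (the role the PML update plays above), then finish the boundary cells once the halos arrive. The mpi4py sketch below shows the pattern on a toy 1-D field; the decomposition, stencil, and array sizes are illustrative only, and the script must be launched under mpirun.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      field = np.random.rand(1_000_000)

      left, right = (rank - 1) % size, (rank + 1) % size
      send_l, send_r = field[:1].copy(), field[-1:].copy()
      recv_l, recv_r = np.empty(1), np.empty(1)

      # Post the non-blocking halo exchange...
      reqs = [comm.Isend(send_l, dest=left),   comm.Isend(send_r, dest=right),
              comm.Irecv(recv_l, source=left), comm.Irecv(recv_r, source=right)]

      # ...and overlap it with communication-independent interior work.
      old_in_l, old_in_r = field[1], field[-2]   # save pre-update neighbors
      field[1:-1] = 0.5 * field[1:-1] + 0.25 * (field[:-2] + field[2:])

      MPI.Request.Waitall(reqs)          # halos have arrived; finish the borders
      field[0]  = 0.5 * field[0]  + 0.25 * (recv_l[0] + old_in_l)
      field[-1] = 0.5 * field[-1] + 0.25 * (old_in_r + recv_r[0])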

  15. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and enable a new class of in-silico applications. However, performance gains on these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques for solving the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  16. Developing a Complete and Effective ACT-R Architecture

    DTIC Science & Technology

    2008-01-01

    of computational primitives, as contrasted with the predominant "one-off" and "grab-bag" cognitive models in the field. These architectures have...transport/semaphore protocols connected via a glue script. Both protocols rely on the fact that file rename and file remove operations are atomic...the Trial Log file until just prior to processing the next input request. Thus, to perform synchronous identifications it is necessary to run an

  17. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  18. Quantification of signal detection performance degradation induced by phase-retrieval in propagation-based x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Chou, Cheng-Ying; Anastasio, Mark A.

    2016-04-01

    In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption and phase contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase retrieval always improves image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true, and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase retrieval is quantified by computing the efficiency of the HO as the ratio of the test-statistic signal-to-noise ratios (SNRs) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase retrieval can impair signal detection performance in XPC imaging.
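
    For readers unfamiliar with the figure of merit, the Hotelling SNR for a known signal with difference image ds and data covariance K is sqrt(ds' K^{-1} ds), and the efficiency above compares this quantity before and after retrieval. The sketch below computes both on synthetic data, using a smoothing-plus-noise operator as a crude stand-in for phase retrieval; the paper's imaging and retrieval models are not reproduced here.

      import numpy as np

      def hotelling_snr(present, absent):
          """SNR of the Hotelling test statistic: sqrt(ds' K^{-1} ds)."""
          ds = present.mean(axis=0) - absent.mean(axis=0)
          K = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
          return float(np.sqrt(ds @ np.linalg.solve(K, ds)))

      rng = np.random.default_rng(0)
      n, d = 2000, 16                                  # realizations x pixels
      bg = rng.normal(0.0, 1.0, (n, d))                # known background
      sig = np.full(d, 0.4)                            # known signal
      raw_absent, raw_present = bg, bg + sig

      # Stand-in "retrieval": smoothing plus fresh noise destroys information.
      blur = np.eye(d) + 0.5 * (np.eye(d, k=1) + np.eye(d, k=-1))
      ret_absent  = raw_absent  @ blur + rng.normal(0.0, 1.0, (n, d))
      ret_present = raw_present @ blur + rng.normal(0.0, 1.0, (n, d))

      snr_raw = hotelling_snr(raw_present, raw_absent)
      snr_ret = hotelling_snr(ret_present, ret_absent)
      print("HO efficiency:", snr_ret / snr_raw)       # < 1: detection degraded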

  19. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems that maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
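
    In its simplest form, a run-time autotuner is an exhaustive timing loop over candidate configurations, as sketched below; CUSH itself is not shown, and the kernel and its tunable block size are stand-ins chosen only to make the search observable.

      import time

      import numpy as np

      def kernel(x, block_rows):
          """Stand-in kernel whose speed depends on a tunable parameter."""
          out = np.empty_like(x)
          for i in range(0, x.shape[0], block_rows):
              out[i:i + block_rows] = np.sqrt(x[i:i + block_rows]) * 2.0
          return out

      def autotune(x, candidates, repeats=3):
          """Time each configuration and return the fastest one."""
          best_cfg, best_t = None, float("inf")
          for cfg in candidates:
              times = []
              for _ in range(repeats):
                  t0 = time.perf_counter()
                  kernel(x, cfg)
                  times.append(time.perf_counter() - t0)
              if min(times) < best_t:
                  best_cfg, best_t = cfg, min(times)
          return best_cfg, best_t

      x = np.random.rand(1_000_000)
      cfg, t = autotune(x, candidates=[1024, 8192, 65536, 262144])
      print(f"fastest block_rows={cfg} ({t * 1e3:.1f} ms)")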

  20. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, implemented the data initialization and exchange between the computing nodes, and implemented the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation with those from the sequential computation (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved by THC-MP on parallel computing facilities.
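
    The correctness check described above, parallel results matching the sequential code, can be illustrated in miniature: a domain-decomposed update with one ghost cell per subdomain boundary must reproduce the serial update to round-off. The sketch below does this for a toy 1-D explicit diffusion step; THC-MP's coupled solvers and 3-D grids are not represented.

      import numpy as np

      def step(u):
          """One explicit diffusion update; boundary cells held fixed."""
          out = u.copy()
          out[1:-1] = u[1:-1] + 0.4 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
          return out

      u = np.random.rand(100)
      seq = step(u)                       # "original code": serial update

      mid = 50                            # two subdomains, one ghost cell each
      left  = step(u[:mid + 1])           # ghost cell on the right
      right = step(u[mid - 1:])           # ghost cell on the left
      par = np.hstack([left[:-1], right[1:]])

      assert np.allclose(seq, par)        # decomposition reproduces the serial run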

  1. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  2. Progress in a novel architecture for high performance processing

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    The high performance processing (HPP) is an innovative architecture targeting high-performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning, and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully under the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed under the TSMC 16 nm FinFET Compact (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).

  3. Mindstorms robots and the application of cognitive load theory in introductory programming

    NASA Astrophysics Data System (ADS)

    Mason, Raina; Cooper, Graham

    2013-12-01

    This paper reports on a series of introductory programming workshops, initially targeting female high school students, which utilised Lego Mindstorms robots. Cognitive load theory (CLT) was applied to the instructional design of the workshops, and a controlled experiment was also conducted investigating aspects of the interface. Results indicated that a truncated interface led to better learning by novice programmers, as measured by participants' test performance, as well as enhanced shifts in self-efficacy and a lowered perception of difficulty. There was also a transfer effect to another programming environment (Alice). It is argued that the results indicate that, for novice programmers, the mere on-screen presence of additional (redundant) entities acts as a form of tacit distraction, thus impeding learning. The utility of CLT for analysing, designing, and delivering aspects of computer programming environments and instructional materials is discussed.

  4. Computers, Technology, and Disability. [Update]

    ERIC Educational Resources Information Center

    American Council on Education, Washington, DC. HEATH Resource Center.

    This paper describes programs and resources that focus on access of postsecondary students with disabilities to computers and other forms of technology. Increased access to technological devices and services is provided to students with disabilities under the Technology-Related Assistance for Individuals with Disabilities Act (Tech Act). Section…

  5. Analyzing Dynamics of Cooperating Spacecraft

    NASA Technical Reports Server (NTRS)

    Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.

    2004-01-01

    A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but had not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently and synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes the user can choose from are Runge-Kutta-Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.
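
    The weak-coupling mode amounts to propagating each spacecraft independently while advancing all of them with a common step size. The sketch below shows that pattern with a fixed-step RK4 integrator and simple two-body dynamics; the dynamics, integrator, and initial states are illustrative stand-ins for the library's many options.

      import numpy as np

      MU = 398600.4418                    # Earth GM, km^3/s^2

      def two_body(_, y):
          r = y[:3]
          return np.hstack([y[3:], -MU * r / np.linalg.norm(r) ** 3])

      def rk4_step(f, t, y, h):
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h / 2 * k1)
          k3 = f(t + h / 2, y + h / 2 * k2)
          k4 = f(t + h, y + h * k3)
          return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Two spacecraft in a loose formation; position (km), velocity (km/s).
      states = [np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0]),
                np.array([7010.0, 0.0, 0.0, 0.0, 7.541, 0.0])]
      t, h = 0.0, 10.0
      for _ in range(360):                # one hour of synchronized 10 s steps
          states = [rk4_step(two_body, t, s, h) for s in states]
          t += h
      print(np.linalg.norm(states[0][:3] - states[1][:3]), "km apart")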

  6. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  7. A methodology to emulate and evaluate a productive virtual workstation

    NASA Technical Reports Server (NTRS)

    Krubsack, David; Haberman, David

    1992-01-01

    The Advanced Display and Computer Augmented Control (ADCACS) Program at ACT is sponsored by NASA Ames to investigate the broad field of technologies which must be combined to design a 'virtual' workstation for Space Station Freedom. This program has progressed in several areas and has resulted in the definition of requirements for a workstation. A unique combination of technologies at the ACT Laboratory has been networked to create an effective experimental environment. This environment allows the integration of nonconventional input devices with a high-power graphics engine within the framework of an expert system shell which coordinates the heterogeneous inputs with the 'virtual' presentation. The flexibility of the workstation evolves as experiments are designed and conducted to evaluate the condition descriptions and rule sets of the expert system shell and its effectiveness in driving the graphics engine. Workstation productivity is defined by the achievable performance in the emulator of the calibrated 'sensitivity' of the input devices, the graphics presentation, the possible optical enhancements to achieve a wide-field-of-view color image, and the flexibility of the conditional descriptions in the expert system shell in adapting to prototype problems.

  8. Cell-NPE (Numerical Performance Evaluation): Programming the IBM Cell Broadband Engine -- A General Parallelization Strategy

    DTIC Science & Technology

    2008-04-01

    Space GmbH as follows: B. TECHNICAL PROPOSAL/DESCRIPTION OF WORK Cell: A Revolutionary High Performance Computing Platform. On 29 June 2005 [1...IBM has announced that it has partnered with Mercury Computer Systems, a maker of specialized computers. The Cell chip provides massive floating-point...the computing industry away from the traditional processor technology dominated by Intel. While in the past, the development of computing power has

  9. Multi-qubit gates protected by adiabaticity and dynamical decoupling applicable to donor qubits in silicon

    DOE PAGES

    Witzel, Wayne; Montano, Ines; Muller, Richard P.; ...

    2015-08-19

    In this paper, we present a strategy for producing multiqubit gates that promise high fidelity with minimal tuning requirements. Our strategy combines gap protection from the adiabatic theorem with dynamical decoupling in a complementary manner. Energy-level transition errors are protected by adiabaticity, and remaining phase errors are mitigated via dynamical decoupling. This is a powerful way to divide and conquer the various error channels. In order to accomplish this without violating a no-go theorem regarding black-box dynamically corrected gates [Phys. Rev. A 80, 032314 (2009)], we require a robust operating point (sweet spot) in control space where the qubits interact with little sensitivity to noise. There are also energy gap requirements for effective adiabaticity. We apply our strategy to an architecture in Si with P donors where we assume we can shuttle electrons between different donors. Electron spins act as mobile ancillary qubits and P nuclear spins act as long-lived data qubits. Furthermore, this system can have a very robust operating point where the electron spin is bound to a donor in the quadratic Stark shift regime. High-fidelity single-qubit gates may be performed using well-established global magnetic resonance pulse sequences. Single electron-spin preparation and measurement have also been demonstrated. Thus, putting this all together, we present a robust universal gate set for quantum computation.

  10. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to computing multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
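
    The distance stage being accelerated is a standard dynamic-programming recurrence; the software sketch below computes it cell by cell, whereas the linear systolic array evaluates an entire anti-diagonal of the DP matrix per clock step, which is the source of the speedup. Edit distance is used here as a hedged stand-in for the tool's actual pairwise distance measure.

      def edit_distance(a, b):
          """Row-by-row DP; on the FPGA, anti-diagonals run in parallel."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution
              prev = curr
          return prev[-1]

      seqs = ["GATTACA", "GACTATA", "GATTAGA"]
      dist = {(i, j): edit_distance(seqs[i], seqs[j])
              for i in range(len(seqs)) for j in range(i + 1, len(seqs))}
      print(dist)   # pairwise distances feed guide-tree construction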

  11. An Engineering Model of Human Balance Control-Part I: Biomechanical Model.

    PubMed

    Barton, Joseph E; Roy, Anindo; Sorkin, John D; Rogers, Mark W; Macko, Richard

    2016-01-01

    We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS) and an older high-fall-risk (HFR) subject before a balance training intervention, and captured the older subject's improvement after training, to the point that his performance approached that of the young subject. This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system, and into the system's adaptations to changes in these elements' performance and capabilities.
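
    Inverse dynamics, the step named above, solves the Newton-Euler equations segment by segment for the joint loads implied by the measured kinematics and GRFs. The sketch below works one planar, single-segment example; all numbers are invented, and the paper's model chains this computation through 13 segments in three dimensions.

      import numpy as np

      def cross2(a, b):
          """z-component of the 2-D cross product."""
          return a[0] * b[1] - a[1] * b[0]

      m, I, g = 70.0, 10.0, 9.81            # mass (kg), inertia (kg m^2), gravity
      a_com = np.array([0.10, 0.05])        # measured COM acceleration (m/s^2)
      alpha = 0.2                           # measured angular acceleration (rad/s^2)
      grf = np.array([5.0, 700.0])          # ground reaction force (N)
      r_grf = np.array([0.05, -0.90])       # COM -> center of pressure (m)
      r_jnt = np.array([0.00, -0.90])       # COM -> joint (m)

      # Newton: sum of forces = m a  =>  solve for the unknown joint force.
      f_jnt = m * a_com - grf - np.array([0.0, -m * g])
      # Euler about the COM: I alpha = tau_joint + moments of the applied forces.
      tau_jnt = I * alpha - cross2(r_grf, grf) - cross2(r_jnt, f_jnt)
      print("joint force (N):", f_jnt, "joint torque (N m):", tau_jnt)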

  12. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  13. Unsteady Thick Airfoil Aerodynamics: Experiments, Computation, and Theory

    NASA Technical Reports Server (NTRS)

    Strangfeld, C.; Rumsey, C. L.; Mueller-Vahl, H.; Greenblatt, D.; Nayeri, C. N.; Paschereit, C. O.

    2015-01-01

    An experimental, computational, and theoretical investigation was carried out to study the aerodynamic loads acting on a relatively thick NACA 0018 airfoil when subjected to pitching and surging, individually and synchronously. Both pre-stall and post-stall angles of attack were considered. Experiments were carried out in a dedicated unsteady wind tunnel, with large surge amplitudes, and airfoil loads were estimated by means of unsteady surface-mounted pressure measurements. Theoretical predictions were based on Theodorsen's and Isaacs' results as well as on the relatively recent generalizations of van der Wall. Both two- and three-dimensional computations were performed on structured grids employing unsteady Reynolds-averaged Navier-Stokes (URANS). For pure surging at pre-stall angles of attack, the correspondence between experiments and theory was satisfactory; this served as a validation of Isaacs' theory. Discrepancies were traced to dynamic trailing-edge separation, even at low angles of attack. Excellent correspondence was found between experiments and theory for airfoil pitching as well as combined pitching and surging; the latter appears to be the first clear validation of van der Wall's theoretical results. Although qualitatively similar to experiment at low angles of attack, two-dimensional URANS computations yielded notable errors in the unsteady load effects of pitching, surging, and their synchronous combination. The main reason is believed to be that the URANS equations do not resolve wake vorticity (explicitly modeled in the theory) or the resulting rolled-up unsteady flow structures, because high values of eddy viscosity tend to "smear" the wake. At post-stall angles, three-dimensional computations illustrated the importance of modeling the tunnel side walls.

  14. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on demand from a third party, is a new approach to high performance computing and is implemented here to perform ultra-fast MC calculations in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high-performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
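
    Because MC histories are independent, the distribution step reduces to partitioning histories across workers with distinct random seeds and summing the tallies afterward. The sketch below mimics that master/worker fan-out with a local process pool standing in for the cloud nodes; the real deployment used MPI and EGS5, and the "transport" here is only a placeholder.

      from multiprocessing import Pool

      import numpy as np

      def simulate(args):
          """Worker: run an independently seeded batch and return its tally."""
          seed, n_histories = args
          rng = np.random.default_rng(seed)
          # Placeholder transport: tally total path length of exponential tracks.
          return rng.exponential(scale=1.0, size=n_histories).sum()

      if __name__ == "__main__":
          total, workers = 1_000_000, 8
          jobs = [(seed, total // workers) for seed in range(workers)]
          with Pool(workers) as pool:             # stand-in for cloud worker nodes
              tallies = pool.map(simulate, jobs)  # fan out, then aggregate
          print("mean path length:", sum(tallies) / total)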

  15. Exploiting Anti-T-shaped Graphene Architecture to Form Low Tortuosity, Sieve-like Interfaces for High-Performance Anodes for Li-Based Cells

    PubMed Central

    2017-01-01

    Graphitic carbon anodes have long been used in Li ion batteries due to their combination of attractive properties, such as low cost, high gravimetric energy density, and good rate capability. However, one significant challenge is controlling, and optimizing, the nature and formation of the solid electrolyte interphase (SEI). Here it is demonstrated that carbon coating via chemical vapor deposition (CVD) facilitates high electrochemical performance of carbon anodes. We examine and characterize the substrate/vertical graphene interface (multilayer graphene nanowalls coated onto carbon paper via plasma enhanced CVD), revealing that these low-tortuosity and high-selection graphene nanowalls act as fast Li ion transport channels. Moreover, we determine that the hitherto neglected parallel layer acts as a protective surface at the interface, enhancing the anode performance. In summary, these findings not only clarify the synergistic role of the parallel functional interface when combined with vertical graphene nanowalls but also have facilitated the development of design principles for future high rate, high performance batteries. PMID:29392179

  16. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud

    PubMed Central

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-01-01

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available ‘off-the-shelf’ computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16–480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969

  17. 78 FR 49524 - Privacy Act of 1974; CMS Computer Match No. 2013-08; HHS Computer Match No. 1309

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111-152) (collectively, the ACA...). INCLUSIVE DATES OF THE MATCH: The CMP will become effective no sooner than 40 days after the report of the...

  18. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... (Pub. L. 111- 148), as amended by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111... Entitlements Program System of Records Notice, 77 FR 47415 (August 8, 2012). Inclusive Dates of the Match: The...
