Sample records for modern computing power

  1. A Smart Home Test Bed for Undergraduate Education to Bridge the Curriculum Gap from Traditional Power Systems to Modernized Smart Grids

    ERIC Educational Resources Information Center

    Hu, Qinran; Li, Fangxing; Chen, Chien-fei

    2015-01-01

    There is a worldwide trend to modernize old power grid infrastructures to form future smart grids, which will achieve efficient, flexible energy consumption by using the latest technologies in communication, computing, and control. Smart grid initiatives are moving power systems curricula toward smart grids. Although the components of smart grids…

  2. Through Kazan ASPERA to Modern Projects

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Kitiashvili, Irina; Petrova, Natasha

    The European Union is now forming the Sixth Framework Programme. One of the objectives of the EU Programme is to open up national research and training programmes. Russian PhD students and young astronomers face organizational and financial difficulties in gaining access to modern databases and astronomical projects, and so they have not been included in the European overview of priorities. Modern requirements for organizing observational projects on powerful telescopes assume painstaking scientific computer preparation of the application, and the rigid competition for observing time assumes preliminary computer modeling of the target object if the application is to succeed. Kazan AstroGeoPhysics Partnership

  3. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is carrying out an extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model using an enhanced modern structured analysis (EMSA) methodology. Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  4. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area, and power consumption. Using special SRAM cells and error-correcting codes is often too expensive relative to the performance needed. We discuss the possibility of finding optimal solutions for modern onboard computers for scientific apparatus, focusing on multi-level cache memory design.

  5. Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies

    ERIC Educational Resources Information Center

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-01-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…

  6. Oscillatory threshold logic.

    PubMed

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limits, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
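
    As a hedged illustration of the threshold-logic abstraction this work builds on (not of the oscillator dynamics themselves), a gate that fires when a weighted sum of binary inputs reaches a threshold can be sketched in a few lines; the gate weights and the half-adder construction below are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of classic threshold logic, the abstraction the paper's
# coupled oscillators realize in hardware. A gate fires (outputs 1) when the
# weighted sum of its binary inputs reaches a threshold theta.

def threshold_gate(inputs, weights, theta):
    """Return 1 if the weighted input sum reaches the threshold theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def AND(a, b):
    return threshold_gate([a, b], [1, 1], 2)

def OR(a, b):
    return threshold_gate([a, b], [1, 1], 1)

def NOT(a):
    return threshold_gate([a], [-1], 0)

def half_adder(a, b):
    """One-bit half adder from threshold gates: sum = XOR, carry = AND."""
    carry = AND(a, b)
    s = AND(OR(a, b), NOT(carry))  # XOR via (a OR b) AND NOT(a AND b)
    return s, carry
```

Any Boolean function can be composed this way, which is why oscillator implementations of the threshold primitive are sufficient for general-purpose logic.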

  7. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    NASA Astrophysics Data System (ADS)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system, which comprises a body of verified initial data (sets of input data for models describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of operation) and a unified system of new-generation computation codes for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the system have shown that it is possible in principle to set up a unified system of computation codes in a common software environment for interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. Once the full-scale version of the system is put into operation, the engineering, design, construction, and operating organizations concerned will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. This domestically developed commercial-grade software product, set to operate as an independent application within the project, will bring additional competitive advantages in the modern market of nuclear power technologies.

  8. Oscillatory Threshold Logic

    PubMed Central

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limits, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034

  9. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphics cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
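
    A minimal NumPy sketch of the computation such a library accelerates, assuming a gridded FFT with nearest-grid-point sampling; the real library uses proper interpolation, phase centering, and GPU kernels, and the function names here are hypothetical:

```python
import numpy as np

# Sketch: FFT a model sky image to the visibility plane, sample it at the
# observed (u, v) baselines, and score the model with a chi-squared against
# the observed visibilities. Nearest-grid-point sampling is a deliberate
# simplification for illustration.

def synthetic_visibilities(image, dxy, u, v):
    """Sample the FFT of `image` (square, pixel size dxy) at baselines (u, v)."""
    n = image.shape[0]
    vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dxy))
    iu = np.clip(np.searchsorted(freqs, u), 0, n - 1)  # nearest-grid sampling
    iv = np.clip(np.searchsorted(freqs, v), 0, n - 1)
    return vis_grid[iv, iu]

def chi2(model_vis, obs_vis, weights):
    """Weighted chi-squared between model and observed visibilities."""
    return np.sum(weights * np.abs(model_vis - obs_vis) ** 2)
```

Each (u, v) sample is independent, which is why the per-visibility sampling and chi-squared reduction map well onto GPU threads.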

  10. Revisiting Mathematical Problem Solving and Posing in the Digital Era: Toward Pedagogically Sound Uses of Modern Technology

    ERIC Educational Resources Information Center

    Abramovich, S.

    2014-01-01

    The availability of sophisticated computer programs such as "Wolfram Alpha" has made many problems found in the secondary mathematics curriculum somewhat obsolete for they can be easily solved by the software. Against this background, an interplay between the power of a modern tool of technology and educational constraints it presents is…

  11. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation, a permanent record for the patient, and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All are proposed within a framework for improving and assisting medical practice and the forthcoming scenario of the information chain in telemedicine.

  12. Green Day? An Old Mill City Leads a New Revolution in Massachusetts

    ERIC Educational Resources Information Center

    Brown, Robert A.

    2012-01-01

    The Northeast United States just experienced one of the region's worst natural disasters. Fortunately, because of the confluence of modern computing power and scientific computing methods, weather forecasting models predicted Sandy's very complicated trajectory and development with a precision that would not have been possible even a decade ago.…

  13. Join the Center for Applied Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd; Bremer, Timo; Van Essen, Brian

    The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.

  14. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  15. Computational Insights into the Central Role of Nonbonding Interactions in Modern Covalent Organocatalysis

    DOE PAGES

    Walden, Daniel; Ogba, O. Maduka; Johnston, Ryne C.; ...

    2016-06-06

    The flexibility, complexity, and size of contemporary organocatalytic transformations pose interesting and powerful opportunities to computational and experimental chemists alike. In this Account, we disclose our recent computational investigations of three branches of organocatalysis in which nonbonding interactions, such as C–H···O/N interactions, play a crucial role in the organization of transition states, catalysis, and selectivity.

  16. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  17. A Survey of Architectural Techniques For Improving Cache Power Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    Modern processors use increasingly large on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving the energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. To provide an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches.

  18. Systems Engineering Building Advances Power Grid Research

    ScienceCinema

    Virden, Jud; Huang, Henry; Skare, Paul; Dagle, Jeff; Imhoff, Carl; Stoustrup, Jakob; Melton, Ron; Stiles, Dennis; Pratt, Rob

    2018-01-16

    Researchers and industry are now better equipped to tackle the nation’s most pressing energy challenges through PNNL’s new Systems Engineering Building – including challenges in grid modernization, buildings efficiency and renewable energy integration. This lab links real-time grid data, software platforms, specialized laboratories and advanced computing resources for the design and demonstration of new tools to modernize the grid and increase buildings energy efficiency.

  19. Application of evolutionary computation in ECAD problems

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hyun; Hwang, Seung H.

    1998-10-01

    Design of a modern electronic system is a complicated task that demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering, and peak power estimation, where their benefits are clearly observed. This paper presents experiences and discusses issues in those applications.

  20. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  1. Eastern Wind Data Set | Grid Modernization | NREL

    Science.gov Websites

    Wind power for each grid cell was computed by combining these data sets with a composite turbine power curve and the wind speed at the site. Adjustments were made for model biases, wake losses, and wind gusts, and the turbine power conversion was also updated to better reflect future wind turbine technology. The 12-hour discontinuity was

  2. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
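
    A hedged, naive sketch of the stereo block-matching core described above, written for clarity rather than speed; the FPGA implementation streams scanlines in fixed-point hardware, and the parameter names here are illustrative:

```python
import numpy as np

# Sketch: for each pixel in the left image, slide a (2*radius+1)^2 window
# along the same row of the right image and pick the disparity with the
# lowest sum of absolute differences (SAD). Written naively; the FPGA
# version computes the same cost in a raster-scan, in-stream fashion.

def block_match(left, right, max_disp=16, radius=2):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(radius, h - radius):
        for x in range(radius + max_disp, w - radius):
            patch = left[y-radius:y+radius+1, x-radius:x+radius+1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-radius:y+radius+1,
                             x-d-radius:x-d+radius+1].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Every output pixel's cost search is independent of its neighbours, which is what makes the algorithm amenable to a fully pipelined hardware implementation.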

  3. Developing and applying modern methods of leakage monitoring and state estimation of fuel at the Novovoronezh nuclear power plant

    NASA Astrophysics Data System (ADS)

    Povarov, V. P.; Tereshchenko, A. B.; Kravchenko, Yu. N.; Pozychanyuk, I. V.; Gorobtsov, L. I.; Golubev, E. I.; Bykov, V. I.; Likhanskii, V. V.; Evdokimov, I. A.; Zborovskii, V. G.; Sorokin, A. A.; Kanyukova, V. D.; Aliev, T. N.

    2014-02-01

    The results of developing and implementing the modernized fuel leakage monitoring methods at the shut-down and running reactor of the Novovoronezh nuclear power plant (NPP) are presented. An automated computerized expert system integrated with an in-core monitoring system (ICMS) and installed at the Novovoronezh NPP unit no. 5 is described. If leaky fuel elements appear in the core, the system allows one to perform on-line assessment of the parameters of leaky fuel assemblies (FAs). The computer expert system units designed for optimizing the operating regimes and enhancing the fuel usage efficiency at the Novovoronezh NPP unit no. 5 are now being developed.

  4. Foundational Report Series. Advanced Distribution management Systems for Grid Modernization (Importance of DMS for Distribution Grid Modernization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui

    2015-09-01

    Grid modernization is transforming the operation and management of electric distribution systems from manual, paper-driven business processes to electronic, computer-assisted decisionmaking. At the center of this business transformation is the distribution management system (DMS), which provides a foundation from which optimal levels of performance can be achieved in an increasingly complex business and operating environment. Electric distribution utilities are facing many new challenges that are dramatically increasing the complexity of operating and managing the electric distribution system: growing customer expectations for service reliability and power quality, pressure to achieve better efficiency and utilization of existing distribution system assets, and reduction of greenhouse gas emissions by accommodating high penetration levels of distributed generating resources powered by renewable energy sources (wind, solar, etc.). Recent "storm of the century" events in the northeastern United States and the lengthy power outages and customer hardships that followed have greatly elevated the need to make power delivery systems more resilient to major storm events and to provide a more effective electric utility response during such regional power grid emergencies. Despite these newly emerging challenges for electric distribution system operators, only a small percentage of electric utilities have actually implemented a DMS. This paper discusses reasons why a DMS is needed and why the DMS may emerge as a mission-critical system that will soon be considered essential as electric utilities roll out their grid modernization strategies.

  5. The journey from forensic to predictive materials science using density functional theory

    DOE PAGES

    Schultz, Peter A.

    2017-09-12

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.

  6. The journey from forensic to predictive materials science using density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter A.

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.

  7. Teaching Electronics and Laboratory Automation Using Microcontroller Boards

    ERIC Educational Resources Information Center

    Mabbott, Gary A.

    2014-01-01

    Modern microcontroller boards offer the analytical chemist a powerful and inexpensive means of interfacing computers and laboratory equipment. The availability of a host of educational materials, compatible sensors, and electromechanical devices make learning to implement microcontrollers fun and empowering. This article describes the advantages…

  8. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with far faster computation and higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require heavy computation. This interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is cheap and widely available, so they have become an alternative to conventional processors. Graphics chips, once fixed-function hardware, have been transformed into modern, powerful, and programmable processors, and in recent years the use of graphics processing units for general-purpose computation has drawn researchers and developers to this approach. The biggest problem is that graphics processing units use a programming model unlike conventional methods; efficient GPU programming therefore requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind, and multi-core GPU processors cannot be programmed with traditional sequential or event-driven techniques. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, providing the computation both quickly and accurately. By comparison, CPUs, which execute one computation at a time under flow control, are slower. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were evaluated against the program's results. The GPGPU method is especially useful for repeating the same computations over highly dense data, finding the solution quickly.
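
    As a hedged example of the per-pixel data parallelism that makes projective rectification a good GPGPU fit, the mapping can be sketched in NumPy, with one output pixel per would-be GPU thread; nearest-neighbour sampling and the function name are illustrative simplifications, not the paper's CUDA code:

```python
import numpy as np

# Sketch: every output pixel is mapped through a 3x3 homography H
# independently -- exactly the one-thread-per-pixel pattern GPUs execute
# well. The inverse homography pulls each output coordinate back into the
# source image; nearest-neighbour sampling keeps the example short.

def rectify(image, H, out_shape):
    h_out, w_out = out_shape
    out = np.zeros(out_shape, dtype=image.dtype)
    Hinv = np.linalg.inv(H)  # map output coords back into the source image
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ pts                       # homogeneous source coordinates
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return out
```

A CUDA version would replace the vectorized NumPy arithmetic with one kernel thread per (x, y) output coordinate performing the same inverse mapping.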

  9. Research on computer-aided design of modern marine power systems

    NASA Astrophysics Data System (ADS)

    Ding, Dongdong; Zeng, Fanming; Chen, Guojun

    2004-03-01

    To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is put forward that takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the reliance on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.

  10. The Amber Biomolecular Simulation Programs

    PubMed Central

    CASE, DAVID A.; CHEATHAM, THOMAS E.; DARDEN, TOM; GOHLKE, HOLGER; LUO, RAY; MERZ, KENNETH M.; ONUFRIEV, ALEXEY; SIMMERLING, CARLOS; WANG, BING; WOODS, ROBERT J.

    2006-01-01

    We describe the development, current features, and some directions for future development of the Amber package of computer programs. This package evolved from a program that was constructed in the late 1970s to do Assisted Model Building with Energy Refinement, and now contains a group of programs embodying a number of powerful tools of modern computational chemistry, focused on molecular dynamics and free energy calculations of proteins, nucleic acids, and carbohydrates. PMID:16200636

  11. New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases

    NASA Astrophysics Data System (ADS)

    Brescia, Massimo

    2012-11-01

    Data mining, or Knowledge Discovery in Databases (KDD), is the main methodology for extracting the scientific information contained in Massive Data Sets (MDS), but it must tackle crucial problems: transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for e-science in the data-avalanche era, the community needs an infrastructure capable of performing data access, processing, and mining in a distributed but integrated context. The increasing complexity of modern technologies has produced a huge volume of data, and the associated warehouse management, together with the need to optimize analysis and mining procedures, has changed how modern science is done. Classical data exploration, based on local data storage and limited computing infrastructure, is no longer efficient for MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e., of all those techniques that fall into the domain of so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing, and parallel programming need to become an essential part of the scientific background. A practical solution is to provide the research community with easy-to-understand, easy-to-use tools based on Web 2.0 technologies and machine learning methodology: tools in which almost all the complexity is hidden from the final user, but which remain flexible and able to produce efficient and reliable scientific results. All these considerations are described in detail in the chapter, together with examples of modern applications that offer a wide variety of e-science communities a large spectrum of computational facilities for exploiting the wealth of available massive data sets and powerful machine-learning and statistical algorithms.

  12. A discrete Fourier transform for virtual memory machines

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case in which the length of the sequence to be transformed is a power of two.
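
    For reference, the power-of-two restriction mentioned above is the classic radix-2 Cooley-Tukey setting; a textbook recursive sketch follows. This is not the report's virtual-memory-optimized algorithm, only the standard construction it builds on:

```python
import cmath

# Textbook recursive radix-2 Cooley-Tukey FFT for sequences whose length is
# a power of two. Each level splits the input into even- and odd-indexed
# halves and combines their transforms with twiddle factors.

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The report's contribution lies in reordering this computation's memory accesses for locality under virtual memory; the arithmetic itself is the same O(n log n) butterfly structure shown here.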

  13. A computer network with scada and case tools for on-line process control in greenhouses

    NASA Astrophysics Data System (ADS)

    Gieling, Th. H.; van Meurs, W. Th. M.; Janssen, H. J. J.

    Climate control computers in greenhouses are used to control heating and ventilation, supply water and dilute and dispense nutrients. They integrate models into optimally controlled systems. This paper describes how information technology, as in use in other sectors of industry, is applied to greenhouse control. The introduction of modern software and hardware concepts in horticulture adds power and extra opportunities to climate control in greenhouses.

  14. A computer network with SCADA and case tools for on-line process control in greenhouses.

    PubMed

    Gieling ThH; van Meurs WTh; Janssen, H J

    1996-01-01

    Climate control computers in greenhouses are used to control heating and ventilation, supply water, and dilute and dispense nutrients. They integrate models into optimally controlled systems. This paper describes how information technology, as in use in other sectors of industry, is applied to greenhouse control. The introduction of modern software and hardware concepts in horticulture adds power and extra opportunities to climate control in greenhouses.

  15. An on-line reactivity and power monitor for a TRIGA reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binney, Stephen E.; Bakir, Alia J.

    1988-07-01

    As personal computers (PCs) become an ever more significant influence on modern technology, it is reasonable that at some point they would be used to interface with TRIGA reactors. A personal computer with a special interface board has been used to monitor key parameters during operation of the Oregon State University TRIGA Reactor (OSTR). A description of the apparatus used and sample results are included.

  16. The Challenge of Computer Furniture.

    ERIC Educational Resources Information Center

    Dolan, Thomas G.

    2003-01-01

    Explains that classrooms and school furniture were built for a different era and often do not have sufficient power for technology, discussing what is needed to support modern technology in education. One solution involves modular cabling and furniture that is capable of being rearranged. Currently, there are no comprehensive standards from which…

  17. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array, and graphics processor cores, which remain constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
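    The pruning and weight-quantization techniques named in the abstract can be illustrated with a minimal, framework-free sketch (hypothetical helper names; a real ATR pipeline would operate on full weight tensors rather than flat lists):

```python
def prune(weights, threshold):
    """Magnitude pruning: zero out weights whose magnitude is below threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, bits=8):
    """Uniform symmetric quantization to signed integers.

    Returns the integer codes and a dequantized copy for error inspection.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid zero scale
    codes = [round(w / scale) for w in weights]
    return codes, [c * scale for c in codes]
```

    Pruning creates sparsity (zeros that need not be stored or multiplied), while quantization shrinks each remaining weight from 32-bit float to, e.g., 8-bit integer, the two effects behind the memory and throughput gains the abstract targets.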

  18. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC, and GlusterFS into a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  19. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories.

    PubMed

    McGibbon, Robert T; Beauchamp, Kyle A; Harrigan, Matthew P; Klein, Christoph; Swails, Jason M; Hernández, Carlos X; Schwantes, Christian R; Wang, Lee-Ping; Lane, Thomas J; Pande, Vijay S

    2015-10-20

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
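    MDTraj itself computes RMSD with optimal superposition over full trajectories; as a hedged illustration of the underlying metric only, a plain (non-superposed) RMSD over coordinate lists can be written as:

```python
import math

def rmsd(coords_a, coords_b):
    """Plain RMSD between two equal-length lists of (x, y, z) coordinates.

    No optimal superposition is performed here, unlike MDTraj's trajectory
    RMSD, which first aligns the structures.
    """
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

    In MDTraj proper, the analogous call operates on loaded trajectory objects and is vectorized over all frames.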

  20. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories

    PubMed Central

    McGibbon, Robert T.; Beauchamp, Kyle A.; Harrigan, Matthew P.; Klein, Christoph; Swails, Jason M.; Hernández, Carlos X.; Schwantes, Christian R.; Wang, Lee-Ping; Lane, Thomas J.; Pande, Vijay S.

    2015-01-01

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. PMID:26488642

  1. Digital Waveguide Architectures for Virtual Musical Instruments

    NASA Astrophysics Data System (ADS)

    Smith, Julius O.

    Digital sound synthesis has become a standard staple of modern music studios, videogames, personal computers, and hand-held devices. As processing power has increased over the years, sound synthesis implementations have evolved from dedicated chip sets, to single-chip solutions, and ultimately to software implementations within processors used primarily for other tasks (such as for graphics or general purpose computing). With the cost of implementation dropping closer and closer to zero, there is increasing room for higher quality algorithms.

  2. Educate at Penn State: Preparing Beginning Teachers with Powerful Digital Tools

    ERIC Educational Resources Information Center

    Murray, Orrin T.; Zembal-Saul, Carla

    2008-01-01

    University based teacher education programs are slowly beginning to catch up to other professional programs that use modern digital tools to prepare students to enter professional fields. This discussion looks at how one teacher education program reached the conclusion that students and faculty would use notebook computers. Frequently referred to…

  3. Alpha Control - A new Concept in SPM Control

    NASA Astrophysics Data System (ADS)

    Spizig, P.; Sanchen, D.; Volswinkler, G.; Ibach, W.; Koenen, J.

    2006-03-01

    Controlling modern Scanning Probe Microscopes demands highly sophisticated electronics. While flexibility and ample computing power are of great importance in facilitating the variety of measurement modes, extremely low noise is also a necessity. Accordingly, modern SPM controller designs are based on digital electronics to overcome the drawbacks of analog designs. While today's SPM controllers are based on DSPs or microprocessors and often still incorporate analog parts, we are now introducing a completely new approach: using a Field Programmable Gate Array (FPGA) to implement the digital control tasks allows unrivalled data processing speed by computing all tasks in parallel within a single chip. Time-consuming task switching between data acquisition, digital filtering, scanning, and the computing of feedback signals can be completely avoided. Together with a star topology that avoids any bus limitations in accessing the variety of ADCs and DACs, this design guarantees for the first time an entirely deterministic timing capability in the nanosecond regime for all tasks. This becomes especially useful for any external experiments which must be synchronized with the scan, or for high-speed scans that require not only closed-loop control of the scanner but also dynamic correction of the scan movement. Delicate samples additionally benefit from extremely high sample rates, allowing highly resolved signals and low noise levels.

  4. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.

  5. Self-powered information measuring wireless networks using the distribution of tasks within multicore processors

    NASA Astrophysics Data System (ADS)

    Zhuravska, Iryna M.; Koretska, Oleksandra O.; Musiyenko, Maksym P.; Surtel, Wojciech; Assembay, Azat; Kovalev, Vladimir; Tleshova, Akmaral

    2017-08-01

    The article contains basic approaches to develop the self-powered information measuring wireless networks (SPIM-WN) using the distribution of tasks within multicore processors critical applying based on the interaction of movable components - as in the direction of data transmission as wireless transfer of energy coming from polymetric sensors. Base mathematic model of scheduling tasks within multiprocessor systems was modernized to schedule and allocate tasks between cores of one-crystal computer (SoC) to increase energy efficiency SPIM-WN objects.

  6. Parallel algorithm for computation of second-order sequential best rotations

    NASA Astrophysics Data System (ADS)

    Redif, Soydan; Kasap, Server

    2013-12-01

    Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.

  7. From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium

    ERIC Educational Resources Information Center

    Chastenay, Pierre

    2016-01-01

    An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphic cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology makes it so that the audience can observe astronomical phenomena from a geocentric as well as an…

  8. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  9. ECG R-R peak detection on mobile phones.

    PubMed

    Sufi, F; Fang, Q; Cosic, I

    2007-01-01

    Mobile phones have become an integral part of modern life. Due to their ever-increasing processing power, mobile phones are rapidly expanding their arena from sole telecommunication devices to organizers, calculators, gaming devices, web browsers, music players, audio/video recording devices, navigators, etc. The processing power of modern mobile phones has been harnessed for many innovative purposes. In this paper, we propose utilizing mobile phones for the monitoring and analysis of biosignals. The computation performed inside the mobile phone's processor can thus be exploited for healthcare delivery. We performed a literature review on R-R interval detection from ECG and selected a few PC-based algorithms. Three of those existing R-R interval detection algorithms were then programmed on the Java platform. Performance monitoring and comparison studies were carried out on three different mobile devices to determine their applicability in a real-time telemonitoring scenario.
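    As an illustration of the kind of R-R detection the paper benchmarks (a hypothetical threshold-plus-refractory sketch, not one of the three algorithms the authors ported to Java):

```python
def detect_r_peaks(signal, fs, threshold, refractory_s=0.2):
    """Find local maxima above threshold, enforcing a refractory period.

    The refractory period (default 200 ms) suppresses double detections,
    since two R peaks cannot occur that close together physiologically.
    """
    min_gap = int(refractory_s * fs)
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def rr_intervals(peaks, fs):
    """Convert peak sample indices to R-R intervals in seconds."""
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
```

    Real ECG detectors (e.g. the PC-based algorithms surveyed in the paper) add band-pass filtering, differentiation, and adaptive thresholds before this peak-picking step.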

  10. Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.

    PubMed

    Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan

    2017-06-01

    Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty in transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the local browser undermine the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, real-time computing, interactive data analysis, and cross-platform analyses feature WebGL- and HTML5-based tools, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Computational protein design-the next generation tool to expand synthetic biology applications.

    PubMed

    Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel

    2018-05-02

    One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.

  12. The environment power system analysis tool development program

    NASA Technical Reports Server (NTRS)

    Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Stevens, N. John; Putnam, Rand M.; Roche, James C.; Wilcox, Katherine G.

    1990-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining system performance of power systems in both naturally occurring and self-induced environments. The program is producing an easy to use computer aided engineering (CAE) tool general enough to provide a vehicle for technology transfer from space scientists and engineers to power system design engineers. The results of the project after two years of a three year development program are given. The EPSAT approach separates the CAE tool into three distinct functional units: a modern user interface to present information, a data dictionary interpreter to coordinate analysis; and a data base for storing system designs and results of analysis.

  13. NREL's Cybersecurity Initiative Aims to Wall Off the Smart Grid from

    Science.gov Websites

    … provided the Energy Department with $4.5 billion to modernize the electric power grid. One key to this … As just one example, in typical computer-based communications systems, like the Internet, data is … found only one vulnerability, which was due to a misconfigured device. Through just that one error, the …

  14. A Technical Review of Cellular Radio and Analysis of a Possible Protocol

    DTIC Science & Technology

    1992-09-01

    … 1. The Pioneers … 2. Time line of Radio Evolution … cellular telephone. Advances in low-power radio transmission and the speed with which modern computers can aid in frequency management and signal … lecturer at the Royal Institution in London. He subsequently worked his way up to lecturer and devoted ever increasing amounts of time to experiments …

  15. Supercomputing with toys: harnessing the power of NVIDIA 8800GTX and playstation 3 for bioinformatics problem.

    PubMed

    Wilson, Justin; Dai, Manhong; Jakupovic, Elvis; Watson, Stanley; Meng, Fan

    2007-01-01

    Modern video cards and game consoles typically have much better performance to price ratios than that of general purpose CPUs. The parallel processing capabilities of game hardware are well-suited for high throughput biomedical data analysis. Our initial results suggest that game hardware is a cost-effective platform for some computationally demanding bioinformatics problems.

  16. Dense and Sparse Matrix Operations on the Cell Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peratt, A.L.; Mostrom, M.A.

    With the availability of 80--125 MHz microprocessors, the methodology developed for the simulation of problems in pulsed power and plasma physics on modern day supercomputers is now amenable to application on a wide range of platforms including laptops and workstations. While execution speeds with these processors do not match those of large scale computing machines, resources such as computer-aided-design (CAD) and graphical analysis codes are available to automate simulation setup and process data. This paper reports on the adaptation of IVORY, a three-dimensional, fully-electromagnetic, particle-in-cell simulation code, to this platform independent CAD environment. The primary purpose of this talk is to demonstrate how rapidly a pulsed power/plasma problem can be scoped out by an experimenter on a dedicated workstation. Demonstrations include a magnetically insulated transmission line, power flow in a graded insulator stack, a relativistic klystron oscillator, and the dynamics of a coaxial thruster for space applications.

  18. Turbulent heat transfer performance of single stage turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amano, R.S.; Song, B.

    1999-07-01

    To increase the efficiency and the power of modern power plant gas turbines, designers are continually trying to raise the maximum turbine inlet temperature. Here, a numerical study based on the Navier-Stokes equations on a three-dimensional turbulent flow in a single stage turbine stator/rotor passage has been conducted and reported in this paper. The full Reynolds-stress closure model (RSM) was used for the computations, and the results were also compared with computations made using the Launder-Sharma low-Reynolds-number κ-ε model. The computational results obtained using these models were compared in order to investigate the turbulence effect in the near-wall region. The set of governing equations in a generalized curvilinear coordinate system was discretized using the finite volume method with non-staggered grids. The numerical modeling was performed to capture the interaction between the stator and rotor blades.

  19. Physics and Robotic Sensing -- the good, the bad, and approaches to making it work

    NASA Astrophysics Data System (ADS)

    Huff, Brian

    2011-03-01

    All of the technological advances that have benefited consumer electronics have direct application to robotics. Technological advances have resulted in the dramatic reduction in size, cost, and weight of computing systems, while simultaneously doubling computational speed every eighteen months. The same manufacturing advancements that have enabled this rapid increase in computational power are now being leveraged to produce small, powerful and cost-effective sensing technologies applicable for use in mobile robotics applications. Despite the increase in computing and sensing resources available to today's robotic systems developers, there are sensing problems typically found in unstructured environments that continue to frustrate the widespread use of robotics and unmanned systems. This talk presents how physics has contributed to the creation of the technologies that are making modern robotics possible. The talk discusses theoretical approaches to robotic sensing that appear to suffer when they are deployed in the real world. Finally the author presents methods being used to make robotic sensing more robust.

  20. Electron tubes for industrial applications

    NASA Astrophysics Data System (ADS)

    Gellert, Bernd

    1994-05-01

    This report reviews research and development efforts in recent years for vacuum electron tubes, in particular power grid tubes for industrial applications. Physical and chemical effects are discussed that determine the performance of today's devices. Due to the progress made in the fundamental understanding of materials and newly developed processes, the reliability and reproducibility of power grid tubes have been improved considerably. Modern computer-controlled manufacturing methods ensure a high reproducibility of production, and continuous quality certification according to ISO 9001 guarantees future high quality standards. Some typical applications of these tubes are given as examples.

  1. Physics and engineering aspects of cell and tissue imaging systems: microscopic devices and computer assisted diagnosis.

    PubMed

    Chen, Xiaodong; Ren, Liqiang; Zheng, Bin; Liu, Hong

    2013-01-01

    The conventional optical microscopes have been used widely in scientific research and in clinical practice. The modern digital microscopic devices combine the power of optical imaging and computerized analysis, archiving and communication techniques. It has a great potential in pathological examinations for improving the efficiency and accuracy of clinical diagnosis. This chapter reviews the basic optical principles of conventional microscopes, fluorescence microscopes and electron microscopes. The recent developments and future clinical applications of advanced digital microscopic imaging methods and computer assisted diagnosis schemes are also discussed.

  2. An introduction to real-time graphical techniques for analyzing multivariate data

    NASA Astrophysics Data System (ADS)

    Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner

    1987-08-01

    Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".

  3. Gamification and serious games for personalized health.

    PubMed

    McCallum, Simon

    2012-01-01

    Computer games are no longer just a trivial activity played by children in arcades. Social networking and casual gaming have broadened the market for, and acceptance of, games. This has coincided with a realization of their power to engage and motivate players. Good computer games are excellent examples of modern educational theory [1]. The military, health providers, governments, and educators, all use computer games. This paper focuses on Games for Health, discussing the range of areas and approaches to developing these games. We extend a taxonomy for Games for Health, describe a case study on games for dementia sufferers, and finally, present some challenges and research opportunities in this area.

  4. DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk

    The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only for saving energy, but also due to the ecologic commitment inherent to power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes, and numerically evaluates DEEP-SaM for clearing provisioning markets, based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models, and the implications of the inherent fixed costs related to energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for the long-term sustainable management of modern datacenters.
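    The linear power model the abstract focuses on, with its fixed (idle) cost component, can be sketched as follows (hypothetical function names; the wattages and price in the usage comment are illustrative, not from the paper):

```python
def power_draw(utilization, p_idle, p_max):
    """Linear power model: a fixed idle draw plus a utilization-proportional part.

    utilization is in [0, 1]; p_idle and p_max are in watts.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilization

def energy_cost(utilization, hours, p_idle, p_max, price_per_kwh):
    """Energy cost over a period at constant utilization."""
    return power_draw(utilization, p_idle, p_max) / 1000.0 * hours * price_per_kwh
```

    The fixed p_idle term is exactly why chronic underutilization is expensive: a server at 0% utilization still draws a substantial fraction of its peak power, motivating consolidation policies like those the paper evaluates.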

  5. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidth than CPUs and better parallelism. 
We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
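The adjoint method mentioned above can be sketched compactly: for a discretized forward model A(p) u = b and a scalar measurement m.u, a single adjoint solve A^T w = m yields every sensitivity as -w^T (dA/dp_k) u, one cheap product per parameter. The tiny dense system below is a hedged illustration of that bookkeeping, not the authors' GPU code; a real EIT problem uses a sparse FEM system with hundreds of thousands of elements.

```python
# Illustrative adjoint-method Jacobian assembly (not the authors' code).
# For A(p) u = b and measurement m.u, one adjoint solve A^T w = m gives
# d(m.u)/dp_k = -w^T (dA/dp_k) u for every parameter p_k.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][j] * x[j] for j in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def assemble(A0, Ds, p):
    """System matrix A(p) = A0 + sum_k p_k * D_k (a toy parameterization)."""
    n = len(A0)
    return [[A0[i][j] + sum(pk * D[i][j] for pk, D in zip(p, Ds))
             for j in range(n)] for i in range(n)]

def jacobian_row(A0, Ds, p, b, m):
    """Sensitivities of the measurement m.u to every parameter p_k."""
    n = len(A0)
    A = assemble(A0, Ds, p)
    u = solve(A, b)                                # forward solve
    w = solve([list(col) for col in zip(*A)], m)   # single adjoint solve
    return [-sum(w[i] * D[i][j] * u[j]
                 for i in range(n) for j in range(n)) for D in Ds]
```

Each Jacobian entry reduces to one sparse matrix-vector product per parameter; on large meshes it is exactly this per-parameter product that the paper offloads to GPUs.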

  6. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidth compared to CPUs and better parallelism.
We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857

  7. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Conflict: Relative Status-Field Theory, TT Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to 'topdog'…

  8. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Conflict: Relative Status-Field Theory, TU Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…

  9. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Cooperation: Relative Status-Field Theory, UU Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of second stage factor analysis of 'underdog'…

  10. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Cooperation: Relative Status-Field Theory, UT Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of combined…

  11. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Conflict: Relative Status-Field Theory, UU Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of 'underdog'…

  12. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Cooperation: Relative Status-Field Theory, TU Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…

  13. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Patterns of Conflict: Relative Status-Field Theory, UT Actors.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of second stage factor analysis of combined…

  14. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us raise the scanning speed by a factor of nine.

  15. Towards quantum chemistry on a quantum computer.

    PubMed

    Lanyon, B P; Whitfield, J D; Gillett, G G; Goggin, M E; Almeida, M P; Kassal, I; Biamonte, J D; Mohseni, M; Powell, B J; Barbieri, M; Aspuru-Guzik, A; White, A G

    2010-02-01

    Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.

  16. GIER: A Danish computer from 1961 with a role in the modern revolution of astronomy - II

    NASA Astrophysics Data System (ADS)

    Høg, Erik

    2018-04-01

    A Danish computer, GIER, from 1961 played a vital role in the development of a new method for astrometric measurement. This method, photon counting astrometry, ultimately led to two satellites with a significant role in the modern revolution of astronomy. A GIER was installed at the Hamburg Observatory in 1964 where it was used to implement the entirely new method for the measurement of stellar positions by means of a meridian circle, at that time the fundamental instrument of astrometry. An expedition to Perth in Western Australia with the instrument and the computer was a success. This method was also implemented in space in the first ever astrometric satellite Hipparcos launched by ESA in 1989. The Hipparcos results published in 1997 revolutionized astrometry with an impact in all branches of astronomy from the solar system and stellar structure to cosmic distances and the dynamics of the Milky Way. In turn, the results paved the way for a successor, the one million times more powerful Gaia astrometry satellite launched by ESA in 2013. Preparations for a Gaia successor in twenty years are making progress.

  17. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  18. Robotics in medicine

    NASA Astrophysics Data System (ADS)

    Kuznetsov, D. N.; Syryamkin, V. I.

    2015-11-01

Modern technologies play a very important role in our lives. It is hard to imagine how people could get along without personal computers, and companies without powerful computer centers. Nowadays, many devices make modern medicine more effective. Medicine is developing constantly, so the introduction of robots in this sector is a very promising activity. Advances in technology have influenced medicine greatly. Robotic surgery is now actively developing worldwide. Scientists have been carrying out research and practical attempts to create robotic surgeons for more than 20 years, since the mid-1980s. Robotic assistants play an important role in modern medicine. This industry is new enough and is at an early stage of development; despite this, some developments already have worldwide application; they function successfully and bring invaluable help to employees of medical institutions. Today, doctors can perform operations that seemed impossible a few years ago. Such progress in medicine is due to many factors. First, modern operating rooms are equipped with up-to-date equipment, allowing doctors to perform operations more accurately and with less risk to the patient. Second, technology has made it possible to improve the quality of doctors' training. Various types of robots exist now: assistant, military, space, household, and, of course, medical robots. A detailed analysis of existing types of robots and their applications follows. The purpose of the article is to illustrate the most popular types of robots used in medicine.

  19. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Relative Status-Field Theory, Results for Conflict, TU Actors, 1966-1969, An Inventory of Findings.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents findings from an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of discriminant function analysis of…

  20. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Relative Status-Field Theory, Results for Conflict, TT Actors, 1966-69, An Inventory of Findings.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents findings on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents a computer printout of data regarding 'topdog' behavior among nations with regard to economic development and…

  1. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Relative Status-Field Theory, Results for Cooperation, UU Actors, 1966-1969, An Inventory of Findings.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents findings from an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of discriminant function analysis…

2. A low cost amplifier and acquisition system for cortical-electroencephalography in non-human applications.

    PubMed

    Viggiano, A; Coppola, G

    2014-04-01

A simple circuit is described that combines an AC amplifier and an analog-to-digital converter in a single, compact solution, for use in basic research, but not on humans. The circuit sends data to, and is powered from, a common USB port of modern computers; with the proper firmware and driver, the device communicates as an emulated RS232 serial port.

3. A Low Cost Amplifier and Acquisition System for Cortical-Electroencephalography in Non-Human Applications

    PubMed Central

    Viggiano, A; Coppola, G

    2014-01-01

A simple circuit is described that combines an AC amplifier and an analog-to-digital converter in a single, compact solution, for use in basic research, but not on humans. The circuit sends data to, and is powered from, a common USB port of modern computers; with the proper firmware and driver, the device communicates as an emulated RS232 serial port. PMID:24809030

  4. Modern Microwave and Millimeter-Wave Power Electronics

    NASA Astrophysics Data System (ADS)

    Barker, Robert J.; Luhmann, Neville C.; Booske, John H.; Nusinovich, Gregory S.

    2005-04-01

A comprehensive study of microwave vacuum electronic devices and their current and future applications. While both vacuum and solid-state electronics continue to evolve and provide unique solutions, emerging commercial and military applications that call for higher power and higher frequencies to accommodate massive volumes of transmitted data are the natural domain of vacuum electronics technology. Modern Microwave and Millimeter-Wave Power Electronics provides systems designers, engineers, and researchers, especially those with primarily solid-state training, with a thoroughly up-to-date survey of the rich field of microwave vacuum electronic device (MVED) technology. This book familiarizes the R&D and academic communities with the capabilities and limitations of MVED and highlights the exciting scientific breakthroughs of the past decade that are dramatically increasing the compactness, efficiency, cost-effectiveness, and reliability of this entire class of devices. This comprehensive text explores a wide range of topics:

    * Traveling-wave tubes, which form the backbone of satellite and airborne communications, as well as of military electronic countermeasures systems
    * Microfabricated MVEDs and advanced electron beam sources
    * Klystrons, gyro-amplifiers, and crossed-field devices
    * "Virtual prototyping" of MVEDs via advanced 3-D computational models
    * High-Power Microwave (HPM) sources
    * Next-generation microwave structures and circuits
    * How to achieve linear amplification
    * Advanced materials technologies for MVEDs
    * A Web site appendix providing a step-by-step walk-through of a typical MVED design process

    Concluding with an in-depth examination of emerging applications and future possibilities for MVEDs, Modern Microwave and Millimeter-Wave Power Electronics ensures that systems designers and engineers understand and utilize the significant potential of this mature, yet continually developing technology.
SPECIAL NOTE: All of the editors' royalties realized from the sale of this book will fund the future research and publication activities of graduate students in the vacuum electronics field.

  5. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
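The claimed relation is simple enough to state as code. A minimal sketch, with hypothetical function and argument names not taken from the patent:

```python
def future_facility_condition(maintenance_cost, modernization_factor,
                              backlog_factor):
    """Future facility condition for one time period, per the stated
    relation: time-period-specific maintenance cost plus modernization
    factor plus backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor
```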

  6. A full 3D-navigation system in a suitcase.

    PubMed

    Freysinger, W; Truppe, M J; Gunkel, A R; Thumfart, W F

    2001-01-01

    To reduce the impact of contemporary 3D-navigation systems on the environment of typical otorhinolaryngologic operating rooms, we demonstrate that a transfer of navigation software to modern high-power notebook computers is feasible and results in a practicable way to provide positional information to a surgeon intraoperatively. The ARTMA Virtual Patient System has been implemented on a Macintosh PowerBook G3 and, in connection with the Polhemus FASTRAK digitizer, provides intraoperative positional information during endoscopic endonasal surgery. Satisfactory intraoperative navigation has been realized in two- and three-dimensional medical image data sets (i.e., X-ray, ultrasound images, CT, and MR) and live video. This proof-of-concept study demonstrates that acceptable ergonomics and excellent performance of the system can be achieved with contemporary high-end notebook computers. Copyright 2001 Wiley-Liss, Inc.

  7. [Computational chemistry in structure-based drug design].

    PubMed

    Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu

    2013-07-01

Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly, and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small-molecule ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have come to play increasingly important roles in structure-based drug design. Here we review the newly developed computational chemistry methods in structure-based drug design as well as their elegant applications, including binding-site druggability assessment, large-scale virtual screening of chemical databases, and lead compound optimization. Importantly, we address the current bottlenecks and propose practical solutions.

  8. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  9. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, and to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover increasing user needs in capacity, availability, and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  10. Heterogeneous real-time computing in radio astronomy

    NASA Astrophysics Data System (ADS)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  11. Modern Radar Techniques for Geophysical Applications: Two Examples

    NASA Technical Reports Server (NTRS)

    Arokiasamy, B. J.; Bianchi, C.; Sciacca, U.; Tutone, G.; Zirizzotti, A.; Zuccheretti, E.

    2005-01-01

The last decade of the evolution of radar was heavily influenced by the rapid increase in information processing capabilities. Advances in solid-state HF radio devices, digital technology, computing architectures, and software have allowed designers to develop very efficient radars. In designing modern radars, the emphasis goes toward simplification of the system hardware and reduction of overall power, which is compensated by coding and real-time signal processing techniques. Radars are commonly employed in geophysical radio soundings such as ionospheric probing, stratosphere-mesosphere measurement, weather forecasting, ground-penetrating radar (GPR), and radio-glaciology. In the Laboratorio di Geofisica Ambientale of the Istituto Nazionale di Geofisica e Vulcanologia (INGV), Rome, Italy, we developed two pulse compression radars. The first is an HF radar called AIS-INGV (Advanced Ionospheric Sounder), designed both for research purposes and for the routine service of HF radio wave propagation forecasting. The second is a VHF radar called GLACIORADAR, which will replace the high-power envelope radar used by the Italian glaciological group. It will be employed in studying the subglacial structures of Antarctica, giving information about layering, the bedrock, and subglacial lakes if present. These are low-power radars that rely heavily on advanced hardware and powerful real-time signal processing. Additional information is included in the original extended abstract.

  12. Attributes and National Behavior, Part 2: Modern International Relations Monograph Series. Relative Status-Field Theory, Results for Cooperation, TT Actors, 1966-69, An Inventory of Findings.

    ERIC Educational Resources Information Center

    Vincent, Jack E.

    This monograph presents findings from an analysis of data on international cooperation over a three-year period. A computer printout of the analysis is included. The document is part of a large scale research project to test various theories with regard to their power in analyzing international relations. In this monograph, an inventory is…

  13. High Performance Object-Oriented Scientific Programming in Fortran 90

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  14. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  15. The exact analysis of contingency tables in medical research.

    PubMed

    Mehta, C R

    1994-01-01

A unified view of exact nonparametric inference, with special emphasis on data in the form of contingency tables, is presented. While the concept of exact tests has been in existence since the early work of R. A. Fisher, the computational complexity involved in actually executing such tests precluded their use until fairly recently. Modern algorithmic advances, combined with the easy availability of inexpensive computing power, have renewed interest in exact methods of inference, especially because they remain valid in the face of small, sparse, imbalanced, or heavily tied data. After defining exact p-values in terms of the permutation principle, we reference algorithms for computing them. Several data sets are then analysed by both exact and asymptotic methods. We end with a discussion of the available software.
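As a concrete illustration of the permutation principle on a 2x2 table, here is a minimal pure-Python sketch of Fisher's exact test (two-sided, summing the point probabilities of all tables no more probable than the observed one under fixed margins); it illustrates the idea only, not the software discussed in the article.

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    r1, r2 = a + b, c + d          # row margins
    c1, n = a + c, a + b + c + d   # first column margin, grand total

    def prob(x):
        # Hypergeometric probability that the upper-left cell equals x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Sum over all tables consistent with the margins that are no more
    # probable than the observed table (small tolerance for float ties)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

For Fisher's classic tea-tasting table [[3, 1], [1, 3]] this returns 34/70, about 0.486, the standard two-sided result.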

  16. GPU accelerated FDTD solver and its application in MRI.

    PubMed

    Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S

    2010-01-01

    The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we will present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B(1) profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of electromagnetic field compared with its CPU counterpart. The acceleration in runtime has made such investigation possible, and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
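The FDTD update itself is a simple leapfrog stencil, which is what makes it so amenable to GPU parallelization. A minimal 1D sketch in plain Python follows; the grid size, Courant number of 0.5, and soft Gaussian source are arbitrary illustrative choices, far from the 3D MRI-scale problems considered in the paper.

```python
import math

def fdtd_1d(nx=200, steps=300, src=100):
    """Leapfrog 1D Yee updates with a soft Gaussian source; the 0.5
    coefficient is the Courant number c*dt/dx (stable for <= 1 in 1D)."""
    ez = [0.0] * nx  # electric field samples
    hy = [0.0] * nx  # magnetic field samples, staggered half a cell
    for t in range(steps):
        for i in range(nx - 1):              # H update from the curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):               # E update from the curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft source
    return ez
```

Every cell's update depends only on its neighbors from the previous half-step, so all cells can be updated concurrently; that independence is exactly the structure a GPU implementation exploits in 3D.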

  17. The Use of Field Programmable Gate Arrays (FPGA) in Small Satellite Communication Systems

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta; Sims, William Herbert; Casas, Joseph

    2015-01-01

    This paper will describe the use of digital Field Programmable Gate Arrays (FPGA) to contribute to advancing the state-of-the-art in software defined radio (SDR) transponder design for the emerging SmallSat and CubeSat industry, and to provide advances for NASA as described in the TA05 Communication and Navigation Roadmap (Ref 4). Software defined radios have been in use for a long time. A typical SDR implementation uses a processor and software to perform all the functions of filtering, carrier recovery, error correction, framing, and so on. Even with modern high-speed, low-power digital signal processors, high-speed memories, and efficient coding, the compute-intensive nature of digital filtering, error correction and other algorithms prevents such processors from making efficient use of the available bandwidth to the ground. With FPGAs, these compute-intensive tasks can be performed in a parallel, pipelined fashion that uses every clock cycle, significantly increasing throughput while maintaining low power. These methods will implement digital radios with significant data rates in the X and Ka bands. Using these state-of-the-art technologies, unprecedented uplink and downlink capabilities can be achieved in a 1/2 U sized telemetry system. Additionally, modern FPGAs have embedded processing systems, such as ARM cores, integrated inside the FPGA, allowing mundane tasks such as parameter commanding to be handled easily and flexibly. Potential partners include other NASA centers, industry and the DOD. These assets are associated with small satellite demonstration flights, LEO and deep space applications. MSFC currently has an SDR transponder test-bed using hardware-in-the-loop techniques to evaluate and improve SDR technologies.
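The filtering bottleneck described above can be made concrete with a direct-form FIR filter. The serial multiply-accumulate loop below is exactly the work an FPGA unrolls into parallel, pipelined tap multipliers; this is an illustrative sketch, not flight software:

```python
# Direct-form FIR filter in pure Python. Each output sample costs one
# multiply-accumulate (MAC) per tap on a sequential processor; an FPGA
# instead computes all taps in parallel, one sample per clock.
def fir(samples, taps):
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for i, h in enumerate(taps):   # len(taps) MACs per sample
            if n - i >= 0:
                acc += h * samples[n - i]
        out.append(acc)
    return out

# Filtering a unit impulse recovers the tap coefficients, i.e. the
# filter's impulse response.
```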

  18. Real-time implementation of an interactive jazz accompaniment system

    NASA Astrophysics Data System (ADS)

    Deshpande, Nikhil

    Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.

  19. Trends in Programming Languages for Neuroscience Simulations

    PubMed Central

    Davison, Andrew P.; Hines, Michael L.; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing. PMID:20198154
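As a small illustration of why a dynamic general-purpose language appeals here, a toy leaky integrate-and-fire neuron fits in a few lines of plain Python. The constants are arbitrary and this mirrors no particular simulator's API; it only shows the concision the review argues for:

```python
# Toy leaky integrate-and-fire neuron with Euler integration.
# All parameters (time constant, threshold, input current) are
# illustrative placeholders, not values from any published model.
def simulate_lif(i_ext=1.5, t_stop=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes, t = v_rest, [], 0.0
    while t < t_stop:
        v += dt / tau * (v_rest - v + i_ext)   # leaky integration
        if v >= v_thresh:                      # threshold crossing
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

With a suprathreshold input current the neuron fires regularly; sweeping `i_ext` interactively is exactly the kind of exploratory loop a Python front end makes cheap.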

  20. Trends in programming languages for neuroscience simulations.

    PubMed

    Davison, Andrew P; Hines, Michael L; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.

  1. Power estimation on functional level for programmable processors

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Blume, H.; Noll, T. G.

    2004-05-01

    In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern digital signal processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on the separation of the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. The input parameters of the arithmetic functions, such as the achieved degree of parallelism or the kind and number of memory accesses, are obtained by a parser-based automated analysis of the assembler code of the system to be estimated. The approach is demonstrated and evaluated on two modern digital signal processors using a variety of basic algorithms of digital signal processing. The resulting estimation values for the inspected algorithms are compared to physically measured values, yielding a small maximum estimation error of 3%.
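The FLPA idea can be sketched as a sum of per-block model functions of code-derived parameters. The block list and the coefficients below are invented placeholders, not the paper's fitted models:

```python
# Functional-level power analysis in miniature: total power is the sum
# of per-block arithmetic model functions, each driven by parameters a
# parser would extract from assembler code (degree of parallelism,
# memory-access rate, clock frequency). Coefficients are hypothetical.
def flpa_power(parallelism, mem_accesses_per_cycle, f_clock_mhz):
    p_processing = 0.8 * parallelism * f_clock_mhz / 100.0          # mW
    p_clock      = 1.2 * f_clock_mhz / 100.0                        # mW
    p_memory     = 2.0 * mem_accesses_per_cycle * f_clock_mhz / 100.0
    return p_processing + p_clock + p_memory
```

Fitting such coefficients per functional block against physical measurements is what lets the method stay within a few percent of measured power.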

  2. Microelectromechanical reprogrammable logic device.

    PubMed

    Hafiz, M A A; Kosuru, L; Younis, M I

    2016-03-29

    In modern computing, Boolean logic operations are set by interconnect schemes between transistors. As miniaturization at the component level to enhance computational power rapidly approaches physical limits, alternative computing methods are being vigorously pursued. One desired aspect of future computing approaches is provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator, operated at room temperature and under modest vacuum conditions, and are reprogrammable via the a.c. driving frequency. The device is fabricated using a complementary metal oxide semiconductor compatible mass-fabrication process, is suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
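A behavioural sketch of the scheme, with arbitrary illustrative frequencies: each logic input electrothermally shifts the resonance, the output reads "1" when the fixed a.c. drive frequency lands on the shifted resonance, and choosing a different drive frequency reprograms which input combinations respond. This is a toy model of the idea, not the device physics:

```python
# Toy model of frequency-modulated MEMS logic. Base resonance F0,
# per-input shift DELTA and linewidth BW are arbitrary numbers chosen
# only so the three drive frequencies select distinct resonances.
F0, DELTA, BW = 100.0, 5.0, 2.0   # kHz

def mems_logic(a, b, drive):
    resonance = F0 - DELTA * (a + b)   # each active input lowers f
    return int(abs(drive - resonance) < BW)

# Reprogramming = picking the drive frequency:
AND = lambda a, b: mems_logic(a, b, F0 - 2 * DELTA)  # resonates iff a=b=1
XOR = lambda a, b: mems_logic(a, b, F0 - DELTA)      # iff exactly one input
NOR = lambda a, b: mems_logic(a, b, F0)              # iff no input active
```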

  3. Microelectromechanical reprogrammable logic device

    PubMed Central

    Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.

    2016-01-01

    In modern computing, Boolean logic operations are set by interconnect schemes between transistors. As miniaturization at the component level to enhance computational power rapidly approaches physical limits, alternative computing methods are being vigorously pursued. One desired aspect of future computing approaches is provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator, operated at room temperature and under modest vacuum conditions, and are reprogrammable via the a.c. driving frequency. The device is fabricated using a complementary metal oxide semiconductor compatible mass-fabrication process, is suitable for on-chip integration, and promises an alternative electromechanical computing scheme. PMID:27021295

  4. Single-chip microprocessor that communicates directly using light

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Wade, Mark T.; Lee, Yunsup; Orcutt, Jason S.; Alloatti, Luca; Georgas, Michael S.; Waterman, Andrew S.; Shainline, Jeffrey M.; Avizienis, Rimas R.; Lin, Sen; Moss, Benjamin R.; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H.; Cook, Henry M.; Ou, Albert J.; Leu, Jonathan C.; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J.; Popović, Miloš A.; Stojanović, Vladimir M.

    2015-12-01

    Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

  5. Single-chip microprocessor that communicates directly using light.

    PubMed

    Sun, Chen; Wade, Mark T; Lee, Yunsup; Orcutt, Jason S; Alloatti, Luca; Georgas, Michael S; Waterman, Andrew S; Shainline, Jeffrey M; Avizienis, Rimas R; Lin, Sen; Moss, Benjamin R; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H; Cook, Henry M; Ou, Albert J; Leu, Jonathan C; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J; Popović, Miloš A; Stojanović, Vladimir M

    2015-12-24

    Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems--from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a 'zero-change' approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

  6. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
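The cost of "altering the semantics of shared memory" can be made concrete with the classic store-buffering litmus test. Enumerating all sequentially consistent interleavings (a sketch, not the paper's formalism) shows that the outcome r1 = r2 = 0 never occurs under sequential consistency, although relaxed models permit it:

```python
# Store-buffering litmus test under sequential consistency (SC):
#   Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
# SC requires one global interleaving preserving each thread's program
# order, so at least one load must observe the other thread's store.
from itertools import permutations

T1 = [("store", "x", 1), ("load", "y", "r1")]
T2 = [("store", "y", 1), ("load", "x", "r2")]

def sc_outcomes():
    outcomes = set()
    # Every interleaving of the two 2-instruction threads.
    for order in set(permutations([0, 0, 1, 1])):
        mem, regs, idx = {"x": 0, "y": 0}, {}, [0, 0]
        for tid in order:
            op, var, arg = (T1, T2)[tid][idx[tid]]
            idx[tid] += 1
            if op == "store":
                mem[var] = arg
            else:
                regs[arg] = mem[var]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes
```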

  7. Design of cylindrical pipe automatic welding control system based on STM32

    NASA Astrophysics Data System (ADS)

    Chen, Shuaishuai; Shen, Weicong

    2018-04-01

    The rapid growth of the modern economy has sharply increased the demand for pipeline construction, in which welding is a key step. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks compact, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with the automatic welding mechanisms and welding power systems it coordinates. In this paper, a new control system for automatic pipe welding, based on a lower-computer control panel and a host-computer interface, is proposed; it offers many advantages over traditional automatic welding machines.

  8. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  9. Five Indisputable Facts on Modern Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloom, Aaron P; Brinkman, Gregory L; Lopez, Anthony J

    This presentation overviews five indisputable facts about modern power systems: Fact one: The grid can handle more renewable generation than previously thought. Fact two: Geographic and resource diversity provide additional reliability to the system. Fact three: Wind and solar forecasting provide significant value. Fact four: Our electric power markets were not originally designed for variable renewables -- but they could be adapted. Fact five: Modern power electronics are creating new sources of essential reliability services.

  10. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 Gflops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors using the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management must be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
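The coarsest level of that parallelization, domain decomposition with halo (ghost-cell) exchange, can be illustrated in miniature. The 1-D averaging stencil below is a stand-in for the MHD update; the point is that per-subdomain updates after a halo exchange reproduce the global sweep exactly:

```python
# Domain decomposition with ghost cells: splitting the grid, exchanging
# one halo value per interface, and updating each piece independently
# reproduces the global stencil sweep (boundary values held fixed).
def jacobi_step(u):
    # Global update: interior points become the average of neighbours.
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                     for i in range(1, len(u) - 1)] + [u[-1]]

def decomposed_step(u, nsub=2):
    half = len(u) // nsub
    parts = [u[i * half:(i + 1) * half] for i in range(nsub)]
    out = []
    for i, p in enumerate(parts):
        left = parts[i - 1][-1] if i > 0 else None        # halo exchange
        right = parts[i + 1][0] if i < nsub - 1 else None
        ext = ([left] if left is not None else []) + p \
            + ([right] if right is not None else [])
        stepped = jacobi_step(ext)
        lo = 1 if left is not None else 0                  # drop halos
        hi = len(stepped) - (1 if right is not None else 0)
        out.extend(stepped[lo:hi])
    return out
```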

  11. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  13. Parallel Calculations in LS-DYNA

    NASA Astrophysics Data System (ADS)

    Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey

    2017-11-01

    Nowadays, structural mechanics exhibits a trend towards numeric solutions being found for increasingly extensive and detailed tasks, which requires that capacities of computing systems be enhanced. Such enhancement can be achieved by different means. E.g., in case a computing system is represented by a workstation, its components can be replaced and/or extended (CPU, memory etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied for solving problems not immediately related to graphics (CUDA, OpenCL, Shaders etc.). However, not all software suites utilize video cards’ capacities. Another way to increase capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth due to which a quite powerful system can be obtained by combining not particularly powerful nodes. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. To establish a range of dependencies a mere 2-node cluster has proven sufficient.

  14. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions: Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
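The structure of the parallelized algorithm can be sketched with a plain-Python Lloyd's k-means. The per-point assignment step is independent across points, which is exactly what the multi-core version distributes; it is left serial here for brevity:

```python
# Lloyd's k-means with deterministic seeding (first k points). The
# assignment loop is embarrassingly parallel across points -- the part
# a multi-core implementation would split among worker threads.
def kmeans(points, k, iters=20):
    centers = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:   # assignment step: parallelizable per point
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each centre moves to its cluster mean.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    return centers
```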

  15. Modern prospects of development of branch of solar power

    NASA Astrophysics Data System (ADS)

    Luchkina, Veronika

    2017-10-01

    The advantages of solar energy for modern companies are already evident. This article describes the mechanism of solar electricity generation and the production of solar modules using modern solar-energy technologies. The "green energy" branch of solar power has become well established in Russia and enjoys stable demand. The article classifies investments at the different stages of solar power plant construction projects and calculates their economic efficiency. Studying the introduction of these technologies makes it possible to assess the modern prospects for development of the solar power industry.

  16. A versatile system for the rapid collection, handling and graphics analysis of multidimensional data

    NASA Astrophysics Data System (ADS)

    O'Brien, P. M.; Moloney, G.; O'Connor, A.; Legge, G. J. F.

    1993-05-01

    The aim of this work was to provide a versatile system for handling multiparameter data that may arise from a variety of experiments — nuclear, AMS, microprobe elemental analysis, 3D microtomography etc. Some of the most demanding requirements arise in the application of microprobes to quantitative elemental mapping and to microtomography. A system to handle data from such experiments had been under continuous development and use at MARC for the past 15 years. It has now been made adaptable to the needs of multiparameter (or single parameter) experiments in general. The original system has been rewritten, greatly expanded and made much more powerful and faster, by use of modern computer technology — a VME bus computer with a real time operating system and a RISC workstation running Unix and the X Window system. This provides the necessary (i) power, speed and versatility, (ii) expansion and updating capabilities (iii) standardisation and adaptability, (iv) coherent modular programming structures, (v) ability to interface to other programs and (vi) transparent operation with several levels, involving the use of menus, programmed function keys and powerful macro programming facilities.

  17. VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab

    2017-08-01

    A multiplier is one of the essential components in the digital world, for example in digital signal processing, microprocessors and quantum computing, and is widely used in arithmetic units. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aims to design a 2×2 bit fault-tolerant multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented using 90 nm Complementary Metal Oxide Semiconductor (CMOS) technology in the Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates: the fault-tolerant multiplier combines three reversible gates, the Double Feynman gate (F2G), the New Fault Tolerant (NFT) gate and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (approximately 0.067 mm²). The design achieves a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault-tolerant multiplier thus combines low power consumption with high performance, and its fault-tolerance capability makes it suitable for modern computing applications.
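The reversible gates named above have standard behavioural definitions. A small Python model shows the defining property: each gate is a bijection on its input space, so no information (and, in principle, no dissipated energy) is lost. The gate equations for the Feynman gate and F2G are standard; the reversibility checker is an illustrative helper:

```python
# Behavioural models of two standard reversible gates and a brute-force
# reversibility check (a gate is reversible iff it permutes its inputs).
def feynman(a, b):
    # Feynman gate (CNOT): (a, b) -> (a, a XOR b)
    return (a, a ^ b)

def double_feynman(a, b, c):
    # Double Feynman gate F2G: (a, b, c) -> (a, a XOR b, a XOR c)
    return (a, a ^ b, a ^ c)

def is_reversible(gate, width):
    inputs = [tuple((i >> s) & 1 for s in range(width))
              for i in range(2 ** width)]
    # Bijective on {0,1}^width iff all outputs are distinct.
    return len({gate(*x) for x in inputs}) == len(inputs)
```

F2G is also self-inverse: applying it twice returns the original inputs, a convenient property for fault detection.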

  18. Data Science Priorities for a University Hospital-Based Institute of Infectious Diseases: A Viewpoint.

    PubMed

    Valleron, Alain-Jacques

    2017-08-15

    Automation of laboratory tests, bioinformatic analysis of biological sequences, and professional data management have been used routinely in a modern university hospital-based infectious diseases institute since at least the 1980s. However, the scientific methods of the 21st century are changing with the increased power and speed of computers, with the "big data" revolution having already happened in genomics and environmental science, and now arriving in medical informatics. Research will be increasingly "data driven," and the powerful machine learning methods whose efficiency is demonstrated in daily life will also revolutionize medical research. A university-based institute of infectious diseases must therefore not only gather excellent computer scientists and statisticians (as in the past, and as in any medical discipline), but also fully integrate its biologists and clinicians with computer scientists, statisticians, and mathematical modelers having a broad culture in machine learning, knowledge representation, and knowledge discovery. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  19. MIC-SVM: Designing A Highly Efficient Support Vector Machine For Advanced Modern Multi-Core and Many-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Fu, Haohuan

    2014-08-16

    Support Vector Machines (SVM) have been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).
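    As a rough illustration of the optimization problem an SVM solves (not the MIC-SVM implementation itself), a minimal linear SVM can be trained by sub-gradient descent on the regularized hinge loss; the function name, hyperparameters, and toy data below are invented for the example:

```python
def train_linear_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """Primal sub-gradient descent on the regularized hinge loss
    (Pegasos-style); 'points' are 2D, 'labels' are +1/-1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:
                # Margin violated: step along the hinge sub-gradient.
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # Only the L2 regularizer contributes.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

# Linearly separable toy data (invented for the example).
points = [(2.0, 2.0), (3.0, 3.0), (-2.0, -1.0), (-3.0, -2.0)]
labels = [1, 1, -1, -1]
w, b = train_linear_svm(points, labels)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in points]
```

    Production SVM solvers (including parallel ones) use far more sophisticated dual or kernel formulations; this sketch only conveys the margin-maximization idea.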

  20. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
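    The facility-utilization and wait-time statistics mentioned above are the classic outputs of a discrete-event simulation. A minimal sketch of the idea (a single facility with an event queue, not the Extend/STS model itself; all numbers are invented):

```python
import heapq

def simulate(arrival_times, service_time):
    """Single-facility discrete-event simulation: jobs queue for one
    resource; returns per-job wait times and facility utilization."""
    events = [(t, i) for i, t in enumerate(arrival_times)]
    heapq.heapify(events)
    free_at = 0.0   # time at which the facility next becomes idle
    busy = 0.0      # cumulative busy time
    waits = []
    while events:
        t, _ = heapq.heappop(events)
        start = max(t, free_at)  # wait if the facility is still busy
        waits.append(start - t)
        free_at = start + service_time
        busy += service_time
    return waits, busy / free_at

# Four jobs; the middle two queue up behind the first.
waits, utilization = simulate([0.0, 1.0, 2.0, 10.0], service_time=3.0)
```

    Tools like Extend wrap exactly this event-queue mechanism in a graphical block interface, adding stochastic service times and resource pools.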

  1. Development of a Low Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations, and compares the predictions with experimental data for one of the configurations that has been built and is currently being tested.

  2. Development of a Low-Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.

  3. The application of LQR synthesis techniques to the turboshaft engine control problem

    NASA Technical Reports Server (NTRS)

    Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.

    1984-01-01

    A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman filter of the rotor was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
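    For intuition about the LQR step, a minimal sketch (a scalar discrete-time analogue, not the paper's multivariable continuous-time design): iterate the Riccati recursion to a fixed point, form the feedback gain, and check that the closed loop is stable. The plant numbers are invented for the example:

```python
def dlqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR gain: iterate the Riccati recursion
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to a fixed point, then
    K = a b P / (r + b^2 P)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Unstable toy plant x[k+1] = 1.2 x[k] + u[k]; with u = -K x the
# closed loop x[k+1] = (a - b K) x[k] must satisfy |a - b K| < 1.
K = dlqr_gain(a=1.2, b=1.0, q=1.0, r=1.0)
closed_loop = 1.2 - K
```

    The matrix version replaces the scalar recursion with the discrete algebraic Riccati equation; the structure of the gain computation is the same.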

  4. Digital intelligent booster for DCC miniature train networks

    NASA Astrophysics Data System (ADS)

    Ursu, M. P.; Condruz, D. A.

    2017-08-01

    Modern miniature trains are now driven by means of the DCC (Digital Command and Control) system, which allows the human operator or a personal computer to launch commands to each individual train or even to control different features of the same train. The digital command station encodes these commands and sends them to the trains by means of electrical pulses via the rails of the railway network. Due to the development of the miniature railway network, it may happen that the power requirement of the increasing number of digital locomotives, carriages and accessories exceeds the nominal output power of the digital command station. This digital intelligent booster relieves the digital command station from powering the entire railway network all by itself, and it automatically handles the multiple powered sections of the network. This electronic device is also able to detect and process short-circuits and overload conditions, without the intervention of the digital command station.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. E. Lawson, R. Marsala, S. Ramakrishnan, X. Zhao, P. Sichta

    In order to provide improved and expanded experimental capabilities, the existing Transrex power supplies at PPPL are to be upgraded and modernized. Each of the 39 power supplies consists of two six-pulse silicon controlled rectifier sections forming a twelve-pulse power supply. The first modification is to split each supply into two independent six-pulse supplies by replacing the existing obsolete twelve-pulse firing generator with two commercially available six-pulse firing generators. The second change replaces the existing control link with a faster system, with greater capacity, which will allow for independent control of all 78 power supply sections. The third change replaces the existing Computer Automated Measurement and Control (CAMAC) based fault detector with an Experimental Physics and Industrial Control System (EPICS) compatible unit, eliminating the obsolete CAMAC modules. Finally, the remaining relay logic and interfaces to the "Hardwired Control System" will be replaced with a Programmable Logic Controller (PLC).

  6. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  7. Full employment maintenance in the private sector

    NASA Technical Reports Server (NTRS)

    Young, G. A.

    1976-01-01

    Operationally, full employment can be accomplished by applying modern computer capabilities, game and decision concepts, and communication feedback possibilities, rather than accepted economic tools, to the problem of assuring invariant full employment. The government must provide positive direction to individual firms concerning the net number of employees that each firm must hire or refrain from hiring to assure national full employment. To preserve free enterprise and the decision making power of the individual manager, this direction must be based on each private firm's own numerical employment projections.

  8. The computational challenges of Earth-system science.

    PubMed

    O'Neill, Alan; Steenman-Clark, Lois

    2002-06-15

    The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.

  9. Embedded ubiquitous services on hospital information systems.

    PubMed

    Kuroda, Tomohiro; Sasaki, Hiroshi; Suenaga, Takatoshi; Masuda, Yasushi; Yasumuro, Yoshihiro; Hori, Kenta; Ohboshi, Naoki; Takemura, Tadamasa; Chihara, Kunihiro; Yoshihara, Hiroyuki

    2012-11-01

    Hospital Information Systems (HIS) have turned the hospital into a gigantic computer with huge computational power, huge storage, and a wired/wireless local area network. On the other hand, a modern medical device, such as an echograph, is a computer system with several functional units connected by an internal network named a bus. Therefore, we can embed such a medical device into the HIS by simply replacing the bus with the local area network. This paper designed and developed two embedded systems: a ubiquitous echograph system and a networked digital camera. Evaluations of the developed systems clearly show that the proposed approach, embedding existing clinical systems into the HIS, drastically changes productivity in the clinical field. Once a clinical system becomes a pluggable unit for a gigantic computer system, the HIS, the combination of multiple embedded systems with application software designed under deep consideration of clinical processes may lead to the emergence of disruptive innovation in the clinical field.

  10. Initial experience with a handheld device digital imaging and communications in medicine viewer: OsiriX mobile on the iPhone.

    PubMed

    Choudhri, Asim F; Radvany, Martin G

    2011-04-01

    Medical imaging is commonly used to diagnose many emergent conditions, as well as to plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off-the-shelf solution, facilitating communication among radiologists and referring clinicians.

  11. Highly parallel implementation of non-adiabatic Ehrenfest molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kanai, Yosuke; Schleife, Andre; Draeger, Erik; Anisimov, Victor; Correa, Alfredo

    2014-03-01

    While the adiabatic Born-Oppenheimer approximation tremendously lowers computational effort, many questions in modern physics, chemistry, and materials science require an explicit description of coupled non-adiabatic electron-ion dynamics. Electronic stopping, i.e. the energy transfer of a fast projectile atom to the electronic system of the target material, is a notorious example. We recently implemented real-time time-dependent density functional theory based on the plane-wave pseudopotential formalism in the Qbox/qb@ll codes. We demonstrate that explicit integration using a fourth-order Runge-Kutta scheme is very suitable for modern highly parallelized supercomputers. Applying the new implementation to systems with hundreds of atoms and thousands of electrons, we achieved excellent performance and scalability on a large number of nodes both on the BlueGene-based ``Sequoia'' system at LLNL as well as the Cray architecture of ``Blue Waters'' at NCSA. As an example, we discuss our work on computing the electronic stopping power of aluminum and gold for hydrogen projectiles, showing excellent agreement with experiment. These first-principles calculations allow us to gain important insight into the fundamental physics of electronic stopping.
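    The fourth-order Runge-Kutta scheme mentioned above is simple to state; a minimal sketch on a scalar test equation (not the RT-TDDFT implementation itself, where the state is a large set of plane-wave coefficients):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Test problem dy/dt = -y, y(0) = 1, exact solution exp(-t).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
error = abs(y - math.exp(-1.0))  # global error is O(h^4)
```

    Because each step needs only four evaluations of the right-hand side and no linear solves, the scheme parallelizes well, which is one reason it suits large distributed-memory machines.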

  12. A Modern Take on the RV Classics: N-body Analysis of GJ 876 and 55 Cnc

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin E.; Ford, E. B.; Wright, J.

    2013-01-01

    Over the past two decades, radial velocity (RV) observations have uncovered a diverse population of exoplanet systems, in particular a subset of multi-planet systems that exhibit strong dynamical interactions. To extract the model parameters (and uncertainties) accurately from these observations, one requires self-consistent n-body integrations and must explore a high-dimensional (7 x number of planets) parameter space, both of which are computationally challenging. Utilizing the power of modern computing resources, we apply our Radial velocity Using N-body Differential Evolution Markov Chain Monte Carlo code (RUN DEMCMC) to two landmark systems from early exoplanet surveys: GJ 876 and 55 Cnc. For GJ 876, we analyze the Keck HIRES (Rivera et al. 2010) and HARPS (Correia et al. 2010) data and constrain the distribution of the Laplace argument. For 55 Cnc, we investigate the orbital architecture based on a cumulative 1086 RV observations from various sources and transit constraints from Winn et al. 2011. In both cases, we also test for long-term orbital stability.
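    The core of any MCMC parameter estimation is the accept/reject step. As a toy sketch, plain random-walk Metropolis on a one-dimensional target (far simpler than the differential-evolution sampler in RUN DEMCMC, and with invented names and numbers):

```python
import math
import random

def metropolis(log_prob, n_samples, step=1.0, seed=42):
    """Random-walk Metropolis: propose x' = x + N(0, step), accept with
    probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

    In an RV analysis, `log_prob` would wrap an n-body integration of the proposed orbital elements and compare the predicted velocities to the data, which is what makes each likelihood evaluation expensive.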

  13. The Development of Sociocultural Competence with the Help of Computer Technology

    ERIC Educational Resources Information Center

    Rakhimova, Alina E.; Yashina, Marianna E.; Mukhamadiarova, Albina F.; Sharipova, Astrid V.

    2017-01-01

    The article describes the process of developing sociocultural knowledge and competences using computer technologies. On the whole, the development of modern computer technologies allows teachers to broaden trainees' sociocultural outlook and trace their progress online. Observation of modern computer technologies and estimation…

  14. Modernizing and transforming medical education at the Kilimanjaro Christian Medical University College.

    PubMed

    Lisasi, Esther; Kulanga, Ahaz; Muiruri, Charles; Killewo, Lucy; Fadhili, Ndimangwa; Mimano, Lucy; Kapanda, Gibson; Tibyampansha, Dativa; Ibrahim, Glory; Nyindo, Mramba; Mteta, Kien; Kessi, Egbert; Ntabaye, Moshi; Bartlett, John

    2014-08-01

    The Kilimanjaro Christian Medical University (KCMU) College and the Medical Education Partnership Initiative (MEPI) are addressing the crisis in Tanzanian health care manpower by modernizing the college's medical education with new tools and techniques. With a $10 million MEPI grant and the participation of its partner, Duke University, KCMU is harnessing the power of information technology (IT) to upgrade tools for students and faculty. Initiatives in eLearning have included bringing fiber-optic connectivity to the campus, offering campus-wide wireless access, opening student and faculty computer laboratories, and providing computer tablets to all incoming medical students. Beyond IT, the college is also offering wet laboratory instruction for hands-on diagnostic skills, team-based learning, and clinical skills workshops. In addition, modern teaching tools and techniques address the challenges posed by increasing numbers of students. To provide incentives for instructors, a performance-based compensation plan and teaching awards have been established. Also for faculty, IT tools and training have been made available, and a medical education course management system is now being widely employed. Student and faculty responses have been favorable, and the rapid uptake of these interventions by students, faculty, and the college's administration suggests that the KCMU College MEPI approach has addressed unmet needs. This enabling environment has transformed the culture of learning and teaching at KCMU College, where a path to sustainability is now being pursued.

  15. Modernizing and Transforming Medical Education at the Kilimanjaro Christian Medical University College

    PubMed Central

    Lisasi, Esther; Kulanga, Ahaz; Muiruri, Charles; Killewo, Lucy; Fadhili, Ndimangwa; Mimano, Lucy; Kapanda, Gibson; Tibyampansha, Dativa; Ibrahim, Glory; Nyindo, Mramba; Mteta, Kien; Kessi, Egbert; Ntabaye, Moshi; Bartlett, John

    2014-01-01

    The Kilimanjaro Christian Medical University (KCMU) College and the Medical Education Partnership Initiative (MEPI) are addressing the crisis in Tanzanian health care manpower by modernizing the college’s medical education with new tools and techniques. With a $10 million MEPI grant and the participation of its partner, Duke University, KCMU is harnessing the power of information technology (IT) to upgrade tools for students and faculty. Initiatives in eLearning have included bringing fiber-optic connectivity to the campus, offering campus-wide wireless access, opening student and faculty computer laboratories, and providing computer tablets to all incoming medical students. Beyond IT, the college is also offering wet laboratory instruction for hands-on diagnostic skills, team-based learning, and clinical skills workshops. In addition, modern teaching tools and techniques address the challenges posed by increasing numbers of students. To provide incentives for instructors, a performance-based compensation plan and teaching awards have been established. Also for faculty, IT tools and training have been made available, and a medical education course management system is now being widely employed. Student and faculty responses have been favorable, and the rapid uptake of these interventions by students, faculty, and the college’s administration suggests that the KCMU College MEPI approach has addressed unmet needs. This enabling environment has transformed the culture of learning and teaching at KCMU College, where a path to sustainability is now being pursued. PMID:25072581

  16. Composite materials molding simulation for purpose of automotive industry

    NASA Astrophysics Data System (ADS)

    Grabowski, Ł.; Baier, A.; Majzner, M.; Sobek, M.

    2016-08-01

    Composite materials play an increasingly important role in industry as a whole, and a special role in the ever-evolving automotive industry. Every year, composite materials are used in a growing number of elements of car construction. Development requires the search for ever new applications of composite materials in areas where previously only metal materials were used. Requirements for modern solutions, such as reducing the weight of vehicles, required strength, and vibration damping characteristics, go hand in hand with the properties of modern composite materials. The designers faced the challenge of using modern composite materials in the construction of the bodies of power steering systems in vehicles. The initial choice of method for producing composite bodies was injection molding of composite material. Injection molding of polymeric materials has been widely known and used for many years, but injection molding of composite materials is a relatively new and innovative issue; it is not very common and is characterized by different conditions, parameters, and properties relative to the classical method. Therefore, to select the appropriate composite material for injection of the power steering system body, computer analyses were carried out in the Siemens NX 10.0 environment, including the Moldex3D and EasyFill Advanced tools, to simulate the injection of materials from the group of possible solutions. The analyses were carried out on a model of a modernized wheel case of a power steering system. During the analysis, input parameters such as temperature and injector pressure were examined. An important part of the analysis was the propagation of material inside the mold during injection, which allowed the formability of the shape to be determined and possible shape imperfections and air-trap locations to be identified. A very important parameter obtained from the computer analysis was the shrinkage of the material, which significantly affects whether the tested component retains its assumed geometry. It also allowed the prediction of material shrinkage during the process of modeling the shape of the body. The next step was to analyze the numerical results received from the Siemens NX 10 and Moldex3D EasyFill Advanced environment. The injection process was applied to the shape of a prototype power steering body. The material used in the injection process was similar to one of the materials expected to be used in molding. Next, the results were analyzed with respect to geometry, where samples showed aberrations in comparison to the given mold shape. The samples were also analyzed in terms of shrinkage. The research and results are described in detail in this paper.

  17. Articles on Practical Cybernetics. Computer-Developed Computers; Heuristics and Modern Sciences; Linguistics and Practice; Cybernetics and Moral-Ethical Considerations; and Men and Machines at the Chessboard.

    ERIC Educational Resources Information Center

    Berg, A. I.; And Others

    Five articles which were selected from a Russian language book on cybernetics and then translated are presented here. They deal with the topics of: computer-developed computers, heuristics and modern sciences, linguistics and practice, cybernetics and moral-ethical considerations, and computer chess programs. (Author/JY)

  18. Raster-Based Approach to Solar Pressure Modeling

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W. II

    2013-01-01

    An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. The parts of the components being lit depends on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects of illuminated portions of spacecraft, taking self-shading from spacecraft attitude and movable components into account. The key idea in this innovation is to compute results dependent upon complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry, and then the results from the simpler problems are combined to give high-fidelity results for the full geometry. This process is performed by constructing a 3D model of a spacecraft using an appropriate computer language (OpenGL), and running that model on a modern computer's 3D accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, then only the portions of the craft visible in the view are illuminated. 
The view as shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
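The per-pixel flat-plate summation described above can be sketched as follows. This toy version assumes perfectly absorbing surfaces and a single scalar solar-pressure constant at 1 AU, and returns only the force magnitude along the sun direction; the actual algorithm also handles reflective optical coefficients, per-material properties, and torques:

```python
# Solar radiation pressure at 1 AU for a perfectly absorbing surface:
# roughly 1361 W/m^2 divided by the speed of light (N/m^2).
P_SUN = 4.54e-6

def solar_force(pixel_cosines, pixel_area):
    """Sum flat-plate contributions over rendered pixels. Each entry is
    cos(theta) between that pixel's surface normal and the sun direction;
    back-facing pixels (cos <= 0) are not illuminated and contribute nothing."""
    force = 0.0
    for cos_theta in pixel_cosines:
        if cos_theta > 0.0:
            force += P_SUN * pixel_area * cos_theta
    return force

# A 1 m^2 plate facing the sun, rendered as 100 x 100 pixels of 1e-4 m^2 each.
force = solar_force([1.0] * 10000, 1e-4)
```

The GPU rendering step's real contribution is producing the per-pixel normals and material IDs with self-shading already resolved; the final reduction is exactly this kind of sum.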

  19. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  20. AHaH computing-from metastable switches to attractors to machine learning.

    PubMed

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well-known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing, where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating under AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high-level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation, and combinatorial optimization of procedures, all key capabilities of biological nervous systems and modern machine learning algorithms with real-world applications.

  1. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, traditionally used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.
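The network-calculus bounds mentioned in the abstract can be illustrated with the classic token-bucket/rate-latency result: if demand is bounded by an arrival curve a(t) = sigma + rho*t and supply behaves like a service curve b(t) = R*(t - T), standard theory bounds backlog and delay. The parameter names and numbers below are illustrative; the paper derives grid-specific bounds:

```python
# Classic network-calculus bounds for a token-bucket arrival curve
# a(t) = sigma + rho*t served by a rate-latency curve b(t) = R*(t - T)+.
# These are textbook results used here as a stand-in for the paper's
# grid-specific derivation.

def backlog_bound(sigma, rho, R, T):
    # Maximum vertical gap between arrival and service curves (needs rho <= R).
    assert rho <= R, "stability requires demand rate <= supply rate"
    return sigma + rho * T

def delay_bound(sigma, rho, R, T):
    # Maximum horizontal gap: latency plus the burst drained at rate R.
    assert rho <= R
    return T + sigma / R

print(backlog_bound(sigma=5.0, rho=2.0, R=4.0, T=1.5))  # -> 8.0
print(delay_bound(sigma=5.0, rho=2.0, R=4.0, T=1.5))    # -> 2.75
```

In the grid setting, the "delay" bound translates into how long user demand can go unserved, i.e. a service-reliability guarantee.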

  2. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, traditionally used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid. PMID:29354654

  3. Rapid algorithm prototyping and implementation for power quality measurement

    NASA Astrophysics Data System (ADS)

    Kołek, Krzysztof; Piątek, Krzysztof

    2015-12-01

This article presents a Model-Based Design (MBD) approach to rapidly implement power quality (PQ) metering algorithms. Power supply quality is a very important aspect of modern power systems and will become even more important in future smart grids. Maintaining PQ parameters at the desired level will require efficient implementation methods for the metering algorithms. Currently, the development of new, advanced PQ metering algorithms requires new hardware with adequate computational capability and time-intensive, cost-ineffective manual implementation. An alternative, considered here, is the MBD approach, which focuses on modelling and validation of the model by simulation, both well supported by Computer-Aided Engineering (CAE) packages. This paper presents two algorithms utilized in modern PQ meters: a phase-locked loop based on an Enhanced Phase Locked Loop (EPLL), and flicker measurement according to the IEC 61000-4-15 standard. The algorithms were chosen because of their complexity and non-trivial development. They were first modelled in the MATLAB/Simulink package, then tested and validated in a simulation environment. The models, in the form of Simulink diagrams, were next used to automatically generate C code. The code was compiled and executed in real time on the Zynq Xilinx platform, which combines a reconfigurable Field Programmable Gate Array (FPGA) with a dual-core processor. The MBD development of PQ algorithms, automatic code generation, and compilation form a rapid algorithm prototyping and implementation path for PQ measurements. The main advantage of this approach is the ability to focus on the design, validation, and testing stages while skipping over implementation issues. The code generation process renders production-ready code that can be used directly on the target hardware. This is especially important while standards for PQ measurement are in constant development, and the PQ issues in emerging smart grids will require tools for rapid development and implementation of such algorithms.
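The EPLL idea mentioned above can be sketched as a small discrete-time tracker. A full EPLL also adapts the frequency estimate; here the nominal grid frequency is assumed known to keep the sketch short, and the gains are illustrative rather than tuned to any standard:

```python
import math

# Minimal discrete-time sketch of an EPLL-style amplitude/phase tracker for
# a grid signal. A complete EPLL additionally adapts frequency; the nominal
# frequency is assumed known here and the gains are illustrative.

def track_amplitude(signal, dt, omega, mu_a=100.0, mu_p=100.0):
    A, phi = 1.0, 0.0                 # initial amplitude and phase estimates
    for u in signal:
        e = u - A * math.sin(phi)     # instantaneous estimation error
        A += mu_a * e * math.sin(phi) * dt              # amplitude adaptation
        phi += (omega + mu_p * e * math.cos(phi)) * dt  # phase tracking
    return A

dt, f = 1e-4, 50.0
omega = 2 * math.pi * f
u = [2.0 * math.sin(omega * k * dt) for k in range(5000)]  # 0.5 s of 50 Hz
A_hat = track_amplitude(u, dt, omega)
print(round(A_hat, 2))  # settles near the true amplitude of 2.0
```

A model like this, built in Simulink, is exactly the kind of block the MBD flow would validate in simulation and then compile to C for the Zynq target.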

  4. Fast neural net simulation with a DSP processor array.

    PubMed

    Muller, U A; Gunzinger, A; Guggenbuhl, W

    1995-01-01

    This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-b floating-point precision. This is equal to 1.4 Gflops sustained performance. The complete system with 3.8 Gflops peak performance consumes less than 800 W of electrical power and fits into a 19-in rack. While reaching the speed of modern supercomputers, MUSIC still can be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a computing performance to a single user which was unthinkable before. The system's real-time interfaces make it especially useful for embedded applications.

  5. Determination of appropriate DC voltage for switched mode power supply (SMPS) loads

    NASA Astrophysics Data System (ADS)

    Setiawan, Eko Adhi; Setiawan, Aiman; Purnomo, Andri; Djamal, Muchlishah Hadi

    2017-03-01

Nowadays, most modern and efficient household electronic devices operate on Switched Mode Power Supply (SMPS) technology, which converts AC voltage from the grid to DC voltage. In theory and experiment, SMPS loads can be supplied directly with DC voltage. However, the DC voltage rating for energizing electronic home appliances has not yet been standardized. This paper proposes a method to determine an appropriate DC voltage and investigates the power consumption of SMPS loads supplied from AC and DC sources. To determine the appropriate DC voltage, the lux output of several lamps of identical specification was measured under AC voltage and used as a reference. The lamps were then supplied with various DC voltages to obtain the trend of lux output against applied DC voltage. Using this trend and the reference lux value, the appropriate DC voltage was determined. Furthermore, power consumption measurements of home appliances such as a mobile phone, laptop, and personal computer under AC voltage and the appropriate DC voltage were conducted. The results show that the total power consumption of the AC system is higher than that of the DC system. The total (apparent) power consumed by the lamp, mobile phone, and personal computer operated at 220 VAC was 6.93 VA, 34.31 VA, and 105.85 VA respectively. Under 277 VDC, the corresponding load consumption was 5.83 W, 19.11 W, and 74.46 W respectively.
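The determination procedure described above (fit the lux-vs-voltage trend, then invert it at the AC reference lux) can be sketched with a least-squares line. The measurement values below are made-up illustrative numbers, not the paper's data; they were chosen so the answer lands near the 277 VDC level the study reports:

```python
# Sketch of the voltage-determination method: fit lux output against applied
# DC voltage, then solve for the DC voltage reproducing the reference lux
# measured under 220 VAC. Measurements below are synthetic.

def fit_line(xs, ys):
    # Ordinary least-squares fit y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

dc_volts = [240, 260, 280, 300]   # applied DC test voltages (V)
lux      = [410, 445, 480, 515]   # hypothetical lux readings
ref_lux  = 475.0                  # hypothetical reference lux at 220 VAC

a, b = fit_line(dc_volts, lux)
v_dc = (ref_lux - b) / a          # invert the trend at the reference lux
print(round(v_dc, 1))             # -> 277.1
```

Inverting a fitted trend at the AC reference point is the whole method in one line; everything else is measurement.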

  6. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.

  7. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be carried out easily, even for large molecules.

  8. Knowledge-based computer systems for radiotherapy planning.

    PubMed

    Kalet, I J; Paluszynski, W

    1990-08-01

Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment, and the ability of computer programs to simulate anything the machinery can do, we now face the challenge of utilizing this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach is to use artificial intelligence techniques to systematize our present knowledge about the design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning has already indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.

  9. Computer-assisted learning in critical care: from ENIAC to HAL.

    PubMed

    Tegtmeyer, K; Ibsen, L; Goldstein, B

    2001-08-01

    Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been to attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate their use into the modern curriculum. Prior limitations of cost and complexity of computers have consistently decreased since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education with special reference to critical care medicine. In addition, we will examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.

  10. Dimensioning appropriate technical and economic parameters of elements in urban distribution power nets based on discrete fast marching method

    NASA Astrophysics Data System (ADS)

    Afanasyev, A. P.; Bazhenov, R. I.; Luchaninov, D. V.

    2018-05-01

The main purpose of the research is to develop techniques for defining the best technical and economic trajectories of cables in urban power systems. The proposed algorithms for calculating cable-laying routes take into consideration topological, technical, and economic features of the cabling. A discrete variant of the fast marching method is applied as the calculating tool. It has certain advantages compared with other approaches; in particular, the algorithm is computationally inexpensive because it is not iterative. The resulting cable-laying trajectories are optimal from the point of view of technical and economic criteria and conform to the present rules of modern urban development.
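The route-costing idea above can be illustrated on a grid: each cell carries a combined technical/economic laying cost, and a front-propagation shortest-path search (Dijkstra's algorithm is a discrete relative of the fast marching method) finds the cheapest cable trajectory. The grid and costs below are illustrative, not from the paper:

```python
import heapq

# Cheapest cable route over a cost grid via Dijkstra front propagation,
# a discrete analogue of the fast marching method. The returned cost
# includes the start cell. Grid values are illustrative.

def cheapest_route_cost(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
print(cheapest_route_cost(grid, (0, 0), (2, 2)))  # -> 5, following cheap cells
```

In practice the cell costs would encode trenching expense, existing infrastructure, and regulatory constraints along each candidate segment.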

  11. Sensor Systems Based on FPGAs and Their Applications: A Survey

    PubMed Central

    de la Piedra, Antonio; Braeken, An; Touhafi, Abdellah

    2012-01-01

    In this manuscript, we present a survey of designs and implementations of research sensor nodes that rely on FPGAs, either based upon standalone platforms or as a combination of microcontroller and FPGA. Several current challenges in sensor networks are distinguished and linked to the features of modern FPGAs. As it turns out, low-power optimized FPGAs are able to enhance the computation of several types of algorithms in terms of speed and power consumption in comparison to microcontrollers of commercial sensor nodes. We show that architectures based on the combination of microcontrollers and FPGA can play a key role in the future of sensor networks, in fields where processing capabilities such as strong cryptography, self-testing and data compression, among others, are paramount.

  12. Interagency Workshop on Lighter than Air Vehicles, Monterey, Calif., September 9-13, 1974, Proceedings

    NASA Technical Reports Server (NTRS)

    Vittek, J. F., Jr.

    1975-01-01

Papers are presented which review modern lighter-than-air (LTA) airship design concepts and LTA structures and materials technology, as well as perform economic and market analyses for assessment of the viability of future LTA development programs. Potential applications of LTA vehicles are examined. Some of the topics covered include preliminary estimates of operating costs for LTA transports, an economic comparison of three heavy-lift airborne systems, boundary layer control for airships, computer-aided flexible envelope designs, the state of the art of metalclad airships, aspects of hybrid Zeppelins, the LTA vehicle as a total cargo system, unmanned powered balloons, and a practical concept for powered or tethered weight-lifting LTA vehicles. Individual items are announced in this issue.

  13. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

The utility of web browsers for general purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data, typically on a connected High Performance Compute cluster, allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.

  14. Periodic control of the individual-blade-control helicopter rotor

    NASA Technical Reports Server (NTRS)

    Mckillip, R. M., Jr.

    1985-01-01

    This paper describes the results of an investigation into methods of controller design for linear periodic systems utilizing an extension of modern control methods. Trends present in the selection of various cost functions are outlined, and closed-loop controller results are demonstrated for two cases: first, on an analog computer simulation of the rigid out of plane flapping dynamics of a single rotor blade, and second, on a 4 ft diameter single-bladed model helicopter rotor in the MIT 5 x 7 subsonic wind tunnel, both for various high levels of advance ratio. It is shown that modal control using the IBC concept is possible over a large range of advance ratios with only a modest amount of computational power required.

  15. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    PubMed

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in execution times an order of magnitude shorter than the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. The new MIC architecture, on the other hand, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
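The exhaustive search the abstract describes reduces to scoring every SNP pair against the phenotype. The toy "interaction score" below (agreement of an XOR genotype pattern with case/control status) is an illustrative stand-in for the information-theoretic score SNPsyn actually uses; GPUs and MIC coprocessors accelerate exactly this kind of embarrassingly parallel pairwise loop:

```python
from itertools import combinations

# Exhaustive pairwise SNP scan with a toy interaction score: the fraction
# of samples where an XOR of two binary genotypes matches the phenotype.
# The score is an illustrative stand-in, not SNPsyn's actual measure.

def interaction_score(s1, s2, phenotype):
    hits = sum((a ^ b) == p for a, b, p in zip(s1, s2, phenotype))
    return hits / len(phenotype)

def best_pair(snps, phenotype):
    # O(n^2) loop over all pairs; this is the part parallel hardware speeds up.
    return max(combinations(range(len(snps)), 2),
               key=lambda ij: interaction_score(snps[ij[0]], snps[ij[1]],
                                                phenotype))

snps = [
    [0, 0, 1, 1, 0, 1],   # SNP 0
    [0, 1, 0, 1, 1, 0],   # SNP 1
    [1, 1, 1, 0, 0, 0],   # SNP 2
]
phenotype = [0, 1, 1, 0, 1, 1]
print(best_pair(snps, phenotype))  # -> (0, 1)
```

With a million SNPs the pair count is roughly 5x10^11, which is why weeks of single-threaded computation shrink to hours on massively parallel hardware.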

  16. Heterogeneous computing architecture for fast detection of SNP-SNP interactions

    PubMed Central

    2014-01-01

Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in execution times an order of magnitude shorter than the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. The new MIC architecture, on the other hand, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802

  17. Numerical Research of Nitrogen Oxides Formation for Justification of Modernization of P-49 Nazarovsky State District Power Plant Boiler on the Low-temperature Swirl Technology of Burning

    NASA Astrophysics Data System (ADS)

    Trinchenko, A. A.; Paramonov, A. P.; Skouditskiy, V. E.; Anoshin, R. G.

    2017-11-01

Compliance with increasingly stringent normative requirements on pollutant emissions from organic fuel used in the energy sector as a main source of heat demands constant improvement of boiler and furnace equipment and of power equipment in general. Current environmental-protection legislation prescribes compliance with established emission standards both for new construction and for the improvement of existing energy equipment. The paper presents the results of numerical research of low-temperature swirl burning in the P-49 boiler of the Nazarovsky state district power plant. On the basis of modern approaches from the diffusion and kinetic theory of combustion, and an analysis of the physical and chemical processes by which the chemically bound energy of the fuel is converted into heat and gaseous pollutants are generated and transformed, the technological method of nitrogen oxide decomposition on the surface of carbon particles, forming environmentally benign carbon dioxide and molecular nitrogen, is considered for the low-temperature swirl furnace. With the developed model, methodology, and computer program, variant calculations of the combustion process were carried out and a quantitative estimate of the nitrogen oxide emission level of the boiler being modernized was obtained. The simulation results and the experimental data obtained during commissioning and balance tests of the P-49 boiler with the new furnace confirm that the organization of swirl combustion has increased operating efficiency, reduced slagging, significantly reduced nitrogen oxide emissions, and improved ignition and burnout of the fuel.

  18. AHaH Computing–From Metastable Switches to Attractors to Machine Learning

    PubMed Central

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315

  19. Achieving energy efficiency during collective communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao

    2012-09-13

Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling) are often used to reduce the power consumption of compute nodes. To avoid significant performance losses, these techniques should be applied judiciously during parallel application execution. For example, communication phases may be good candidates for DVFS and CPU throttling without incurring a considerable performance loss. Collective operations are often treated as indivisible, and little attention has been devoted to the energy-saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy-saving strategies on a per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power-consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.
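The per-phase reasoning above can be illustrated with a back-of-envelope energy model: during a communication-bound collective, lowering the CPU frequency barely stretches the phase but cuts dynamic power. The cubic power model and all constants below are illustrative assumptions, not measurements from the paper:

```python
# Back-of-envelope energy comparison for one communication phase under a
# simple P = P_static + c*f^3 power model. All constants are illustrative.

def phase_energy(duration_s, p_static_w, c, freq_ghz, slowdown=1.0):
    dynamic_w = c * freq_ghz ** 3       # cubic dynamic-power assumption
    return (p_static_w + dynamic_w) * duration_s * slowdown

# A 2 s all-to-all phase: full speed vs. throttled to 1.2 GHz with an
# assumed 5% stretch in duration (communication-bound, so nearly free).
e_full = phase_energy(2.0, p_static_w=40.0, c=10.0, freq_ghz=2.4)
e_slow = phase_energy(2.0, p_static_w=40.0, c=10.0, freq_ghz=1.2,
                      slowdown=1.05)
print(round(e_full, 1), round(e_slow, 1))  # -> 356.5 120.3
```

The cubic dependence of dynamic power on frequency is why even modest downscaling during idle-ish communication phases pays off so strongly.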

  20. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
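The slice-level parallelism that makes tomographic reconstruction fit multicore machines can be sketched directly: each output slice depends only on the input projections, so slices are independent tasks. The per-slice "kernel" below is a trivial hypothetical stand-in (a weighted sum of projections); the real code combines this decomposition with SIMD vectorization and efficient disk I/O:

```python
from concurrent.futures import ThreadPoolExecutor

# Slice-parallel decomposition of a reconstruction: each output slice is
# computed independently from the shared projections. The per-slice kernel
# here is a hypothetical stand-in for a real backprojection.

def reconstruct_slice(projections, z):
    # Illustrative kernel: weight each projection by the slice index.
    return sum((z + 1) * p for p in projections)

def reconstruct_volume(projections, n_slices, workers=4):
    # Map independent slices onto a pool of workers; order is preserved.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda z: reconstruct_slice(projections, z),
                             range(n_slices)))

volume = reconstruct_volume([1.0, 2.0, 3.0], n_slices=4)
print(volume)  # -> [6.0, 12.0, 18.0, 24.0]
```

In a compiled language the workers run truly in parallel on separate cores; in CPython this sketch only shows the decomposition, since the GIL serializes pure-Python work.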

  1. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
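The behaviour of the run-time reconfigurable gate described above (one element whose function is selected at run time and whose output is retained non-volatilely) can be modelled in a few lines. This is a behavioural toy, not the magnetoresistive physics:

```python
# Behavioural model of a programmable logic element: the function is chosen
# at run time and the output is stored, mimicking the non-volatile MRAM
# state. A toy model, not the device physics.

class ProgrammableGate:
    FUNCS = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "NAND": lambda a, b: 1 - (a & b),
        "NOR":  lambda a, b: 1 - (a | b),
    }

    def __init__(self):
        self.state = 0  # non-volatile output: survives "power-off"

    def operate(self, func, a, b):
        self.state = self.FUNCS[func](a, b)
        return self.state

g = ProgrammableGate()
print(g.operate("NAND", 1, 1), g.operate("NOR", 0, 0))  # -> 0 1
```

Because NAND (or NOR) alone is functionally complete, a fabric of such reconfigurable elements can realize any combinational logic, which is the appeal of the 'chameleon' processor concept.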

  2. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.

  3. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  4. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques that improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
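    The idea behind a sampling-based linear runtime model can be illustrated with a small sketch (the `fit_linear` helper and all timings below are hypothetical numbers for illustration, not the paper's data or method):

```python
# Illustrative sketch of a sampling-based linear runtime predictor: time a few
# small "sample" launches, fit runtime = slope * n + intercept by least
# squares, then extrapolate to the full launch size.

def fit_linear(xs, ys):
    """Closed-form least-squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical sampled (work-items, measured runtime in ms) pairs:
samples = [(1_000, 0.8), (2_000, 1.3), (4_000, 2.3), (8_000, 4.3)]
slope, intercept = fit_linear([n for n, _ in samples],
                              [t for _, t in samples])

# Extrapolate to a full-size launch of 100,000 work-items:
predicted_ms = slope * 100_000 + intercept
print(round(predicted_ms, 3))  # 50.3
```

As the abstract notes, such a linear model breaks down when coalescing or caching dominates, which is what motivates the ML-based refinement.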

  5. Power-plant modernization program in Latvia. Desk Study Report No. 1. Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-08-01

    The Government of Latvia has requested the U.S. Trade and Development Program's (TDP's) assistance in financing the cost of a feasibility study to develop a modernization program for its thermal power stations aimed at improving their performance and efficiency. The consultant will work with engineers and managers of Latvenergo, Latvia's power utility, to review the performance of the country's two thermal power stations and carry out a detailed study for the rehabilitation and modernization of the TEC-2 thermal power station in Riga. The overall goal of the program will be to maximize the output capacity of the country's two power stations through the implementation of economically efficient rehabilitation projects.

  6. Student Research in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Blondin, J. M.

    1999-12-01

    Computational physics can shorten the long road from freshman physics major to independent research by providing students with powerful tools to deal with the complexities of modern research problems. At North Carolina State University we have introduced dozens of students to astrophysics research using the tools of computational fluid dynamics. We have used several formats for working with students, including the traditional approach of one-on-one mentoring, a more group-oriented format in which several students work together on one or more related projects, and a novel attempt to involve an entire class in a coordinated semester research project. The advantages and disadvantages of these formats will be discussed at length, but the single most important influence has been peer support. Having students work in teams or learn the tools of research together but tackle different problems has led to more positive experiences than a lone student diving into solo research. This work is supported by an NSF CAREER Award.

  7. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE PAGES

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  8. A Bitslice Implementation of Anderson's Attack on A5/1

    NASA Astrophysics Data System (ADS)

    Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan

    2018-03-01

    The A5/1 keystream generator is part of the Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance has been extensively analyzed in dozens of papers. However, almost all corresponding methods either employ specialized hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs, this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched; ten instances of A5/1 cryptanalysis were successfully solved in this project in a single week.
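    The bitslicing technique itself can be sketched on a toy 8-bit LFSR (an illustration of the general idea, not the actual A5/1 registers or Anderson's attack): many independent instances are packed one per bit position of a machine word, so each bitwise operation advances all of them at once.

```python
# Illustrative bitslicing sketch on a toy 8-bit LFSR (not the A5/1 registers).
# 64 independent instances are packed one per bit position: slices[i] holds
# bit i of every instance, so a single XOR on 64-bit integers advances all.

WIDTH = 64            # number of instances processed in parallel
NBITS = 8             # toy register length
TAPS = (7, 5, 4, 3)   # feedback taps for the toy LFSR (an arbitrary choice)

def step_scalar(state):
    """Advance one LFSR instance by one clock (reference implementation)."""
    fb = 0
    for t in TAPS:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << NBITS) - 1)

def step_bitsliced(slices):
    """Advance all WIDTH instances at once; slices[i] = bit i of each instance."""
    fb = 0
    for t in TAPS:
        fb ^= slices[t]          # one word-wide XOR = WIDTH scalar XORs
    return [fb] + slices[:-1]    # shift: new bit i comes from old bit i-1

# Pack 64 distinct seed states into bit slices.
states = [(17 * k + 1) % 256 for k in range(WIDTH)]
slices = [sum(((s >> i) & 1) << k for k, s in enumerate(states))
          for i in range(NBITS)]

# Run both simulations for 100 clocks and check they agree.
for _ in range(100):
    states = [step_scalar(s) for s in states]
    slices = step_bitsliced(slices)

unpacked = [sum(((slices[i] >> k) & 1) << i for i in range(NBITS))
            for k in range(WIDTH)]
assert unpacked == states
```

A GPU implementation applies the same word-level parallelism, with each thread advancing its own packed batch of candidate states.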

  9. Mechanistic Insights and Computational Design of Transition-Metal Catalysts for Hydrogenation and Dehydrogenation Reactions.

    PubMed

    Chen, Xiangyang; Yang, Xinzheng

    2016-10-01

    Catalytic hydrogenation and dehydrogenation reactions are fundamentally important in chemical synthesis and industrial processes, and hold potential for applications in the storage and conversion of renewable energy. Modern computational quantum chemistry has already become a powerful tool for understanding the structures and properties of compounds and elucidating the mechanisms of chemical reactions, and therefore holds great promise in the design of new catalysts. Herein, we review our computational studies on the catalytic hydrogenation of carbon dioxide and small organic carbonyl compounds, and on the dehydrogenation of amine-borane and alcohols, with an emphasis on elucidating reaction mechanisms and predicting new catalytic reactions; in return, these studies provide some general ideas for the design of high-efficiency, low-cost transition-metal complexes for hydrogenation and dehydrogenation reactions. © 2016 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    PubMed Central

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2014-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205

  11. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool for revealing noncovalent chemical interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three versions of the code is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.

  12. Operator Support System Design for the Operation of RSG-GAS Research Reactor

    NASA Astrophysics Data System (ADS)

    Santoso, S.; Situmorang, J.; Bakhri, S.; Subekti, M.; Sunaryo, G. R.

    2018-02-01

    The components of the RSG-GAS main control room are facing material ageing and technology obsolescence, and therefore modernization and refurbishment are essential. Modernization of the control room can be applied to the operator support system, whose function is to provide information that assists the operator in diagnosis and actions. The purpose of this research is to design an operator support system for the RSG-GAS control room. The design was developed based on operator requirements in conducting task operation scenarios and on the reactor operation characteristics. These scenarios include power operation, low-power operation and reactor shutdown/scram. The operator support system design is presented in a single computer display which contains the structure and support system elements, e.g. operation procedures, status of safety-related components and operational requirements, operation limit conditions of parameters, alarm information, and a prognosis function. The prototype was developed using LabVIEW software and consists of the component structure and features of the operator support system. Information on each component in the operator support system needs to be completed before it can be applied and integrated in the RSG-GAS main control room.

  13. Improvement of Steam Turbine Operational Performance and Reliability with using Modern Information Technologies

    NASA Astrophysics Data System (ADS)

    Brezgin, V. I.; Brodov, Yu M.; Kultishev, A. Yu

    2017-11-01

    The report reviews improvement methods in the design and operation of steam turbine units based on the application of modern information technologies. In accordance with the life-cycle support methodology, a conceptual model of the information support system for the main life-cycle stages of a steam turbine unit is suggested. A classification system is proposed that ensures sustainable information links between the engineering team (manufacturer's plant) and customer organizations (power plants). The principle of extending parameterization beyond geometric constructions in the design and improvement of steam turbine unit equipment is proposed, studied and justified. The report presents a design methodology for steam turbine unit equipment based on a new oil-cooler design system developed and implemented by the authors. This design system combines a construction subsystem, characterized by extensive use of family tables and templates, with a computation subsystem that includes a methodology for zone-by-zone thermal-hydraulic design calculations of oil coolers. The report also presents data on the developed software for operational monitoring and assessment of equipment parameters, as well as its implementation at five power plants.

  14. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long accustoming times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields, or be used in educational and training applications.

  15. The application of LQR synthesis techniques to the turboshaft engine control problem. [Linear Quadratic Regulator

    NASA Technical Reports Server (NTRS)

    Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.

    1985-01-01

    A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman filter of the rotor was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.

  16. A brief historical introduction to Euler's formula for polyhedra, topology, graph theory and networks

    NASA Astrophysics Data System (ADS)

    Debnath, Lokenath

    2010-09-01

    This article is essentially devoted to a brief historical introduction to Euler's formula for polyhedra, topology, the theory of graphs, and networks, with many examples from the real world. The celebrated Königsberg seven-bridge problem and some of the basic properties of graphs and networks for some understanding of the macroscopic behaviour of real physical systems are included. We also mention some important and modern applications of graph theory or network problems, from transportation to telecommunications. Graphs or networks are effectively used as powerful tools in industrial, electrical and civil engineering, and in communication networks in the planning of business and industry. Graph theory and combinatorics can be used to understand the changes that occur in many large and complex scientific, technical and medical systems. With the advent of fast large computers and the ubiquitous Internet consisting of a very large network of computers, large-scale complex optimization problems can be modelled in terms of graphs or networks and then solved by algorithms available in graph theory. Many large and more complex combinatorial problems dealing with the possible arrangements of situations of various kinds, and computing the number and properties of such arrangements, can be formulated in terms of networks. The Knight's tour problem, Hamilton's tour problem, the problem of magic squares, the Euler Graeco-Latin squares problem and their modern developments in the twentieth century are also included.

  17. Heliophysics Legacy Data Restoration

    NASA Astrophysics Data System (ADS)

    Candey, R. M.; Bell, E. V., II; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Grayzeck, E. J.; Harris, B. T.; Hills, H. K.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; McCaslin, P. W.; McGuire, R. E.; Papitashvili, N. E.; Rhodes, S. A.; Roberts, D. A.; Yurow, R. E.

    2016-12-01

    The Space Physics Data Facility (SPDF), in collaboration with the National Space Science Data Coordinated Archive (NSSDCA), is converting datasets from older NASA missions to online storage. Valuable science is still buried within these datasets; it can be recovered particularly by applying modern algorithms on computers with vastly more storage and processing power than were available when the data were originally measured, and by analyzing them in conjunction with other data and models. The data were also not readily accessible as archived on 7- and 9-track tapes, microfilm, microfiche and other media. Although many datasets have now been moved online in formats that are readily analyzed, others will still require some deciphering to puzzle out the data values and scientific meaning. There is an ongoing effort to convert the datasets to the modern Common Data Format (CDF) and add metadata for use in browse and analysis tools such as CDAWeb.

  18. A new model to compute the desired steering torque for steer-by-wire vehicles and driving simulators

    NASA Astrophysics Data System (ADS)

    Fankem, Steve; Müller, Steffen

    2014-05-01

    This paper deals with the control of the hand-wheel actuator in steer-by-wire (SbW) vehicles and driving simulators (DSs). A novel model for the computation of the desired steering torque is presented. The introduced steering torque computation aims not only to generate a realistic steering feel, meaning that the driver should not miss, in any driving situation, the basic steering functionality of a modern conventional steering system such as an electric power steering (EPS) or hydraulic power steering (HPS) system. In addition, the modular structure of the steering torque computation, combined with suitably selected tuning parameters, has the objective of offering a high degree of customisability of the steering feel and thus of providing each driver with his preferred steering feel in a very intuitive manner. The task and the tuning of each module are first described. Then, the steering torque computation is parameterised such that the steering feel of a series EPS system is reproduced. For this purpose, experiments are conducted in a hardware-in-the-loop environment where a test EPS is mounted on a steering test bench coupled with a vehicle simulator, and parameter identification techniques are applied. Subsequently, how appropriately the steering torque computation mimics the test EPS system is objectively evaluated with respect to criteria concerning the steering torque level and gradient, the feedback behaviour and the steering return ability. Finally, the intuitive tuning of the modular steering torque computation is demonstrated by deriving a sportier steering feel configuration.

  19. Computers and neurosurgery.

    PubMed

    Shaikhouni, Ammar; Elder, J Bradley

    2012-11-01

    At the turn of the twentieth century, the only computational device used in neurosurgical procedures was the brain of the surgeon. Today, most neurosurgical procedures rely at least in part on the use of a computer to help perform surgeries accurately and safely. The techniques that revolutionized neurosurgery were mostly developed after the 1950s. Just before that era, the transistor was invented in the late 1940s, and the integrated circuit was invented in the late 1950s. During this time, the first automated, programmable computational machines were introduced. The rapid progress in the field of neurosurgery not only occurred hand in hand with the development of modern computers, but one also can state that modern neurosurgery would not exist without computers. The focus of this article is the impact modern computers have had on the practice of neurosurgery. Neuroimaging, neuronavigation, and neuromodulation are examples of tools in the armamentarium of the modern neurosurgeon that owe each step in their evolution to progress made in computer technology. Advances in computer technology central to innovations in these fields are highlighted, with particular attention to neuroimaging. Developments over the last 10 years in areas of sensors and robotics that promise to transform the practice of neurosurgery further are discussed. Potential impacts of advances in computers related to neurosurgery in developing countries and underserved regions are also discussed. As this article illustrates, the computer, with its underlying and related technologies, is central to advances in neurosurgery over the last half century. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. A Format for Phylogenetic Placements

    PubMed Central

    Matsen, Frederick A.; Hoffman, Noah G.; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement. PMID:22383988
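    A minimal placement document in the spirit of this JSON-based format might look as follows (the field names follow the published "jplace" layout, but treat this snippet as an invented, illustrative sketch rather than a normative example):

```python
import json

# Minimal sketch of a placement document in a jplace-style JSON layout;
# the tree, read name, and numbers are all invented for illustration.

placement_doc = {
    "version": 3,
    "tree": "((A:0.2{0},B:0.1{1}):0.3{2},C:0.4{3}){4};",  # edges numbered in {}
    "fields": ["edge_num", "likelihood", "like_weight_ratio",
               "distal_length", "pendant_length"],
    "placements": [
        {"p": [[2, -1234.5, 0.9, 0.05, 0.1]],   # one placement for this read
         "n": ["read_0001"]},
    ],
    "metadata": {"invocation": "illustrative example only"},
}

# Any JSON library can serialize and parse the document.
text = json.dumps(placement_doc)
parsed = json.loads(text)
best = dict(zip(parsed["fields"], parsed["placements"][0]["p"][0]))
print(best["edge_num"])  # 2
```

Because the payload is plain JSON, post-analysis tools in any modern language can consume it without a custom parser, which is the portability argument the abstract makes.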

  1. A format for phylogenetic placements.

    PubMed

    Matsen, Frederick A; Hoffman, Noah G; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.

  2. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694

  3. Resting-state EEG study of comatose patients: a connectivity and frequency analysis to find differences between vegetative and minimally conscious states

    PubMed Central

    Lehembre, Rémy; Bruno, Marie-Aurélie; Vanhaudenhuyse, Audrey; Chatelle, Camille; Cologan, Victor; Leclercq, Yves; Soddu, Andrea; Macq, Benoît; Laureys, Steven; Noirhomme, Quentin

    2012-01-01

    The aim of this study was to look for differences in the power spectra and in EEG connectivity measures between patients in the vegetative state (VS/UWS) and patients in the minimally conscious state (MCS). The EEG of 31 patients was recorded and analyzed. Power spectra were obtained using modern multitaper methods. Three connectivity measures (coherence, the imaginary part of coherency and the phase lag index) were computed. Of the 31 patients, 21 were diagnosed as MCS and 10 as VS/UWS using the Coma Recovery Scale-Revised (CRS-R). EEG power spectra revealed differences between the two conditions. The VS/UWS patients showed increased delta power but decreased alpha power compared with the MCS patients. Connectivity measures were correlated with the CRS-R diagnosis; patients in the VS/UWS had significantly lower connectivity than MCS patients in the theta and alpha bands. Standard EEG recorded in clinical conditions could be used as a tool to help the clinician in the diagnosis of disorders of consciousness. PMID:22687166

  4. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  5. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  6. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false-positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  7. Rethinking critical reflection on care: late modern uncertainty and the implications for care ethics.

    PubMed

    Vosman, Frans; Niemeijer, Alistair

    2017-12-01

    Care ethics as initiated by Gilligan, Held, Tronto and others (in the nineteen eighties and nineties) has from its onset been critical of ethical concepts established in modernity, such as 'autonomy', proposing instead to think from within relationships and to pay attention to power. In this article the question is raised whether renewal in this same critical vein is necessary and possible, as late modern circumstances require rethinking the care-ethical inquiry. Two late modern realities that invite such rethinking are complexity and precariousness. Late modern organizations, like the general hospital, co-determined by various (control, information, safety, accountability) systems, are characterized by complexity and the need for complexity reduction, both of which permeate care practices. By means of a heuristic use of the concept of precariousness, taken as the installment of uncertainty, it is shown that relations and power in late modern care organizations have changed, precluding the use of a straightforward domination idea of power. In the final section a proposal is made for rethinking the care-ethical inquiry so as to take late modern circumstances into account: inquiry should always be related to the concerns of people and practitioners from within care practices.

  8. cudaMap: a GPU accelerated program for gene expression connectivity mapping.

    PubMed

    McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong

    2013-10-11

    Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing based on differential gene expression analysis. On a normal desktop PC, it is common for a connectivity mapping task with a single gene signature to take more than two hours to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as ten minutes, greatly facilitating high-throughput candidate therapeutics discovery. We demonstrate dramatic speed differentials between GPU-assisted and CPU executions as the computational load increases for high-accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution to the discovery of candidate therapeutics by enabling speedy execution of heavy-duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
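
    The kernel that such tools parallelize is, at heart, a signature-versus-reference scoring function. A simplified, hypothetical stand-in for the sscMap connection-strength statistic (not cudaMap's actual kernel; gene names and ranks below are illustrative) might look like:

```python
def connection_score(signature, reference_ranks):
    """Toy connection-strength score between a gene signature and one
    reference expression profile (a simplified stand-in for the sscMap
    statistic, not cudaMap's actual CUDA kernel).

    signature:        dict gene -> +1 (up-regulated) or -1 (down-regulated)
    reference_ranks:  dict gene -> signed rank in the reference profile
    """
    raw = sum(sign * reference_ranks[g] for g, sign in signature.items())
    # Normalize by the largest score any signature of this size could reach.
    top = sorted((abs(r) for r in reference_ranks.values()), reverse=True)
    return raw / sum(top[:len(signature)])    # lies in [-1, 1]

sig = {"TP53": +1, "MYC": -1}                  # hypothetical signature
ref = {"TP53": 480, "MYC": -350, "EGFR": 12}   # hypothetical signed ranks
assert -1.0 <= connection_score(sig, ref) <= 1.0
```

    The GPU speedups in the abstract come from evaluating a score of this kind across thousands of reference profiles, and across many permuted signatures when estimating statistical significance, all in parallel.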

  9. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    An approach is defined for iterating over massively large arrays containing sparse data in a manner that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL and our current program are engaged. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA given its computationally intensive requirements for analyzing and understanding the volumes of science data from returned missions.
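
    The decoupling idea can be sketched in a few lines (the paper targets Chapel; the Python interface and class names here are hypothetical): two different storage layouts expose one iteration interface, and the matrix-vector product is written once against that interface.

```python
class COOMatrix:
    """Sparse matrix stored as a list of coordinate triples."""
    def __init__(self, triples):
        self.triples = list(triples)
    def nonzeros(self):                       # the shared iteration interface
        yield from self.triples

class DOKMatrix:
    """Same interface, dictionary-of-keys storage."""
    def __init__(self, entries):
        self.entries = dict(entries)
    def nonzeros(self):
        for (i, j), v in self.entries.items():
            yield i, j, v

def matvec(matrix, x):
    """Matrix-vector product written once, independent of storage layout."""
    y = {}
    for i, j, v in matrix.nonzeros():
        y[i] = y.get(i, 0.0) + v * x[j]
    return y

a = COOMatrix([(0, 0, 2.0), (1, 2, 3.0)])
b = DOKMatrix({(0, 0): 2.0, (1, 2): 3.0})
x = [1.0, 0.0, 4.0]
assert matvec(a, x) == matvec(b, x) == {0: 2.0, 1: 12.0}
```

    Swapping in a new representation only requires implementing `nonzeros`; every loop written against the interface, distributed or not, is untouched.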

  10. 7. VIEW OF THE MODERN SUBSTATION (FOREGROUND), WITH THE OLD ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VIEW OF THE MODERN SUBSTATION (FOREGROUND), WITH THE OLD SWITCHING BUILDING IN THE BACKGROUND, LOOKING SOUTHWEST. - Washington Water Power Company Post Falls Power Plant, Middle Channel Powerhouse & Dam, West of intersection of Spokane & Fourth Streets, Post Falls, Kootenai County, ID

  11. Architectural Techniques For Managing Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    As chip power dissipation becomes a critical challenge in scaling processor performance, computer architects are forced to fundamentally rethink the design of modern processors; the chip-design industry is now at a major inflection point in its hardware roadmap. The high leakage power and low density of SRAM pose serious obstacles to its use in designing large on-chip caches, and for this reason researchers are exploring non-volatile memory (NVM) devices, such as spin torque transfer RAM, phase change RAM, and resistive RAM. However, since NVMs are not strictly superior to SRAM, effective architectural techniques are required to make them a universal memory solution. This book discusses techniques for designing processor caches using NVM devices. It presents algorithms and architectures for improving their energy efficiency, performance, and lifetime. It also provides both qualitative and quantitative evaluation to help readers gain insights and motivate them to explore further. This book will be highly useful for beginners as well as veterans in computer architecture, chip designers, product managers, and technical marketing professionals.

  12. Wireless power transfer inspired by the modern trends in electromagnetics

    NASA Astrophysics Data System (ADS)

    Song, Mingzhao; Belov, Pavel; Kapitanova, Polina

    2017-06-01

    Since the beginning of the 20th century, researchers have been looking for an effective way to transfer power without wired connections, but wireless power transfer technology started to attract extensive interest from industry only in 2007, when the first smartphone was released and a consumer electronics revolution was triggered. Today, wireless power transfer technology has a rich research and development history as well as outstanding advances in commercialization. This review focuses on distinctive implementations of the technology inspired by modern trends in electrodynamics. We compare the performance of power transfer systems based on three kinds of resonators, i.e., metallic coil resonators, dielectric resonators, and cavity mode resonators. We argue that metamaterials and meta-atoms are powerful tools for improving the functionality and obtaining novel properties of such systems. We review different approaches to enhancing the functionality of wireless power transfer systems, including control of the power transfer path and increases in operating range and efficiency. Various applications of wireless power transfer are discussed, and currently available standards are reviewed.

  13. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity, and operating costs in modern power systems have stimulated intensive study of optimal Load Shedding and Generator Rescheduling (LSGR) strategies in the interest of secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with an approximately linearized model, and the LP algorithm is currently considered the most powerful tool for solving the LSGR problem. However, all of the LP algorithms presented in the literature suffer from the following disadvantages: (i) the piecewise linearization involved requires the introduction of a number of new inequalities and slack variables, which places a significant burden on the computing facilities, and (ii) objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows resulting from changes in bus injection power are taken into account in the proposed model through sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of the theory of optimization is included, in which the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory is evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements, as well as the expected lower generation costs of the optimal solution, compared with those obtained from computing the optimal solution with LP. Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.
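
    The sensitivity-coefficient idea behind the QP formulation can be sketched on a toy system (the single-line, two-load setup and all numbers are hypothetical, and the closed-form KKT solution below stands in for Wolfe's method): line flow responds linearly to load shedding through assumed sensitivity coefficients, and a quadratic cost is minimized subject to relieving the overload.

```python
import numpy as np

# Hypothetical data: one overloaded line, two sheddable loads.
flow0, limit = 120.0, 100.0            # MW: present flow and line rating
a = np.array([0.6, 0.3])               # assumed sensitivity coefficients
C = np.diag([1.0, 2.0])                # quadratic shedding-cost weights

# Minimize s' C s subject to a.s = flow0 - limit (overload relieved exactly).
# With the line limit binding and shedding bounds inactive, the KKT system
# of this equality-constrained QP has the closed form
#   s = (b / a' C^{-1} a) * C^{-1} a,   b = flow0 - limit.
b = flow0 - limit
Cinv_a = np.linalg.solve(C, a)
s = (b / (a @ Cinv_a)) * Cinv_a

assert abs(flow0 - a @ s - limit) < 1e-9   # line brought exactly to its rating
assert np.all(s >= 0)                      # bounds indeed inactive here
```

    A full LSGR solver adds generator rescheduling variables, many lines and inequality constraints, and an active-set method; the point of the sketch is that sensitivity coefficients keep line flows linear in the decision variables, so no piecewise linearization of the objective is needed.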

  14. [Data collection in anesthesia. Experiences with the inauguration of a new information system].

    PubMed

    Zbinden, A M; Rothenbühler, H; Häberli, B

    1997-06-01

    In many institutions, information systems are used to process off-line anaesthesia data for invoices, statistical purposes, and quality assurance. Information systems are also increasingly being used to improve process control in order to reduce costs. Most of today's systems were created when information technology and working processes in anaesthesia were very different from those in use today. Thus, many institutions must now replace their computer systems but are probably not aware of how complex this change will be. Modern information systems mostly use client-server architecture and relational databases. Substituting an old system with a new one is frequently a greater task than designing a system from scratch. This article gives the conclusions drawn from the experience obtained when a large departmental computer system was redesigned in a university hospital. The new system was based on a client-server architecture and was developed by an external company without a preceding conceptual analysis. Modules for patient, anaesthesia, surgical, and pain-service data were included. Data were analysed using a separate statistical package (RS/1 from Bolt Beranek), taking advantage of its powerful precompiled procedures. Development and introduction of the new system took much more time and effort than expected despite the use of modern software tools. Introduction of the new program required intensive user training despite the choice of modern graphic screen layouts. Automatic data-reading systems could not be used, as too many faults occurred and the effort for the user was too high. However, after the initial problems were solved, the system turned out to be a powerful tool for quality control (both process and outcome quality), billing, and scheduling. The statistical analysis of the data resulted in meaningful and relevant conclusions. 
Before creating a new information system, the working processes have to be analysed and, if possible, made more efficient; a detailed programme specification must then be made. A servicing and maintenance contract should be drawn up before the order is given to a company. Time periods of equal duration have to be scheduled for defining, writing, testing, and introducing the program. Modern client-server systems with relational databases are by no means simpler to establish and maintain than previous mainframe systems with hierarchical databases, and thus experienced computer specialists need to be close at hand. We recommend collecting data only once for both statistics and quality control. To verify data quality, a system of random spot-sampling has to be established. Despite the large investments needed to build up such a system, we consider it a powerful tool for helping to solve the difficult daily problems of managing a surgical and anaesthesia unit.

  15. Toward a Post-Modern Agenda in Instructional Technology.

    ERIC Educational Resources Information Center

    Solomon, David L.

    2000-01-01

    Discusses the concept of post-modernism and relates it to the field of instructional technology. Topics include structuralism; semiotics; poststructuralism; deconstruction; knowledge and power; critical theory; self-concept; post-modern assumptions; and potential contributions of post-modern concepts in instructional technology. (Contains 80…

  16. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  17. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  18. Geothermal pilot study final report: creating an international geothermal energy community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bresee, J.C.; Yen, W.W.S.; Metzler, J.E.

    The Geothermal Pilot Study under the auspices of the Committee on the Challenges of Modern Society (CCMS) was established in 1973 to apply an action-oriented approach to international geothermal research and development, taking advantage of the established channels of governmental communication provided by the North Atlantic Treaty Organization (NATO). The Pilot Study was composed of five substudies: computer-based information systems; direct application of geothermal energy; reservoir assessment; small geothermal power plants; and hot dry rock concepts. The most significant overall result of the CCMS Geothermal Pilot Study, which is now complete, is the establishment of an identifiable community of geothermal experts in a dozen or more countries active in development programs. Specific accomplishments include the creation of an international computer file of technical information on geothermal wells and fields, the development of studies and reports on direct applications, geothermal fluid injection, and small power plants, and the operation of the visiting scientist program. In the United States, the computer file has already proven useful in the development of reservoir models and of chemical geothermometers. The state-of-the-art report on direct uses of geothermal energy is proving to be a valuable resource document for laypersons and experts in an area of increasing interest to many countries. Geothermal fluid injection studies in El Salvador, New Zealand, and the United States have been assisted by the Reservoir Assessment Substudy and have led to long-range reservoir engineering studies in Mexico. At least seven small geothermal power plants are in use or have been planned for construction around the world since the Small Power Plant Substudy was instituted; at least partial credit for this increased application can be assigned to the CCMS Geothermal Pilot Study. (JGB)

  19. Bildung as a Powerful Tool in Modern University Teaching

    ERIC Educational Resources Information Center

    Olesen, Mogens Noergaard

    2009-01-01

    In this paper we will demonstrate how powerful "Bildung" is as a tool in modern university teaching. The concept of "Bildung" was originally introduced by the German philosopher Immanuel Kant (Kant 1787, 1798, 1804) and the Prussian lawyer and politician Wilhelm von Humboldt (Humboldt 1792, Bohlin 2008). From 1810…

  20. Optimizing Engineering Tools Using Modern Ground Architectures

    DTIC Science & Technology

    2017-12-01

    ...scientific community. Traditional computing architectures were not capable of processing the data efficiently, or in some cases, could not process the... thesis investigates how these modern computing architectures could be leveraged by industry and academia to improve the performance and capabilities of

  1. Constructs of power and equity and their association with contraceptive use among men and women in rural Ethiopia and Kenya.

    PubMed

    Stephenson, Rob; Bartel, Doris; Rubardt, Marcie

    2012-01-01

    Using samples of reproductive-aged men and women from rural Ethiopia and Kenya, this study examines the associations between modern contraceptive use and two scales measuring balances of power and equitable attitudes within relationships. The scales are developed from the Sexual and Reproductive Power Scale (SRPS) and Gender Equitable Men (GEM) scale, which were originally developed to measure relationship power (SRPS) among women and gender-equitable attitudes (GEM) among men. With the exception of Ethiopian women, a higher score on the balance-of-power scale was associated with significantly higher odds of reporting modern contraceptive use. For men and women in both countries, a higher score on the equitable-attitudes scale was associated with significantly higher odds of reporting modern contraceptive use. However, only the highest categories of the scales are associated with contraceptive use, suggesting a threshold effect in the relationships between power, equity, and contraceptive use. The results presented here demonstrate how elements of the GEM and SRPS scales can be used to create scales measuring balances of power and equitable attitudes within relationships that are associated with self-reporting of modern contraceptive use in two resource-poor settings. However, further work with larger sample sizes is needed to confirm these findings and to examine the extent to which these scales can be applied to other social and cultural contexts.

  2. Algebraic geometry and Bethe ansatz. Part I. The quotient ring for BAE

    NASA Astrophysics Data System (ADS)

    Jiang, Yunfeng; Zhang, Yang

    2018-03-01

    In this paper and upcoming ones, we initiate a systematic study of the Bethe ansatz equations for integrable models using modern computational algebraic geometry. We show that algebraic geometry provides a natural mathematical language and powerful tools for understanding the structure of the solution space of Bethe ansatz equations. In particular, we find novel efficient methods to count the number of solutions of Bethe ansatz equations based on Gröbner bases and quotient rings. We also develop an analytical approach based on the companion matrix to perform the sum of on-shell quantities over all physical solutions without solving the Bethe ansatz equations explicitly. To demonstrate the power of our method, we revisit the completeness problem of the Bethe ansatz for the Heisenberg spin chain and calculate the sum rules of OPE coefficients in planar N=4 super-Yang-Mills theory.
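
    The companion-matrix idea can be illustrated in the simplest one-variable case (a toy analogue of the quotient-ring construction the abstract refers to; the polynomial below is chosen for illustration): the sum of any polynomial function over the roots equals the trace of that function evaluated on the companion matrix, with no root-solving required.

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3, kept implicit below.
# Its companion matrix has exactly those roots as eigenvalues, so
# sum_i f(root_i) = trace(f(C)) for any polynomial f.
C = np.array([[0.0, 0.0, 6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0, 6.0]])

sum_roots = np.trace(C)         # 1 + 2 + 3 = 6
sum_squares = np.trace(C @ C)   # 1 + 4 + 9 = 14

assert abs(sum_roots - 6.0) < 1e-9
assert abs(sum_squares - 14.0) < 1e-9
```

    In several variables the same trick works with companion matrices acting on the quotient ring of the Bethe ansatz ideal, which is how the authors evaluate on-shell sums without enumerating solutions.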

  3. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
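
    The two ingredients can be sketched schematically, with Python's thread pool standing in for the dynamic task model and NumPy vectorization standing in for the SIMD inner loop (the attenuation kernel below is a simplified stand-in for the real MOC flux update, not the authors' code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sweep_track(segment_lengths, sigma_t):
    """Schematic per-track kernel: a vectorized attenuation sum over all
    segments of one characteristic track (stand-in for the MOC flux update)."""
    return float(np.sum(np.exp(-sigma_t * segment_lengths)))

rng = np.random.default_rng(0)
tracks = [rng.uniform(0.1, 1.0, size=1000) for _ in range(64)]  # 64 tracks

# Dynamic assignment: worker threads pull independent tracks from the pool
# as they finish, which balances load across unevenly sized tracks.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(sweep_track, tracks, [0.5] * len(tracks)))

assert len(results) == 64 and all(r > 0 for r in results)
```

    On GPU or Xeon Phi the same structure maps tracks to thread blocks or hardware threads and the inner loop to wide vector lanes.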

  4. Clocks to Computers: A Machine-Based “Big Picture” of the History of Modern Science.

    PubMed

    van Lunteren, Frans

    2016-12-01

    Over the last few decades there have been several calls for a “big picture” of the history of science. There is a general need for a concise overview of the rise of modern science, with a clear structure allowing for a rough division into periods. This essay proposes such a scheme, one that is both elementary and comprehensive. It focuses on four machines, which can be seen to have mediated between science and society during successive periods of time: the clock, the balance, the steam engine, and the computer. Following an extended developmental phase, each of these machines came to play a highly visible role in Western societies, both socially and economically. Each of these machines, moreover, was used as a powerful resource for the understanding of both inorganic and organic nature. More specifically, their metaphorical use helped to construe and refine some key concepts that would play a prominent role in such understanding. In each case the key concept would at some point be considered to represent the ultimate building block of reality. Finally, in a refined form, each of these machines would eventually make its entry in scientific research, thereby strengthening the ties between these machines and nature.

  5. Comprehensive assessment of the effective scope of modernization of thermal power plants to substantiate the rational structure of the generating capacities for the future until 2035

    NASA Astrophysics Data System (ADS)

    Veselov, F. V.; Erokhina, I. V.; Makarova, A. S.; Khorshev, A. A.

    2017-03-01

    The article addresses the technical and economic substantiation of the priorities and scope of modernizing the existing thermal power plants (TPPs) in Russia in order to work out long-term forecasts of the development of the industry. Current trends in TPP modernization are analyzed. Updated figures for capital and operating costs are presented, together with estimates of the comparative efficiency of various investment decisions on modernization and equipment replacement at gas- and oil-burning and coal-fired TPPs in the main zones of the national Unified Power System (UPS) of Russia. The results of optimization of the generating capacity structure underlie a study of alternative TPP modernization strategies that differ in the scope of switching to new technologies, capital intensity, and energy efficiency (decrease in the average heat rate). To provide an integral economic assessment of these strategies, the authors modified the traditional approach based on the overall discounted costs of power supply (least-cost planning), supplementing it with a comparison by the weighted average wholesale price of electricity. A method for predicting the wholesale price is proposed, reasoning from the direct and dual solutions of the optimization problem. The method can be adapted to various combinations of the mechanisms of payment for electricity and capacity on the basis of marginal and average costs. Energy and economic analysis showed that the opposing effects of reduced capital investment and of fuel saving change in a nonlinear way as the scope of the switch to more advanced power generation technologies at the TPPs increases. As a consequence, a strategy for modernization of the existing power plants that is rational with respect to total costs of power supply and wholesale electricity prices has been formulated. 
The strategy combines decisions on upgrading and replacing equipment at the existing power plants of various types. The basic parameters of the strategy for the period until 2035 are provided.

  6. A Pilot Study Investigating the Effects of Advanced Nuclear Power Plant Control Room Technologies: Methods and Qualitative Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Blanc, Katya; Powers, David; Joe, Jeffrey

    2015-08-01

    Control room modernization is an important part of life extension for the existing light water reactor fleet. None of the 99 currently operating commercial nuclear power plants in the U.S. has completed a full-scale control room modernization to date. Main control rooms in the existing commercial reactor fleet remain significantly analog, with only limited digital modernizations. Upgrades in the U.S. do not achieve the full potential of newer technologies that might otherwise enhance plant and operator performance. The goal of the control room upgrade benefits research is to identify previously overlooked benefits of modernization, identify candidate technologies that may facilitate such benefits, and demonstrate these technologies through human factors research. This report describes a pilot study to test upgrades to the Human Systems Simulation Laboratory at INL.

  7. Geographically distributed real-time digital simulations using linear prediction

    DOE PAGES

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank; ...

    2016-07-04

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage such unique facilities, geographically distributed real-time co-simulation of power systems is pursued and presented. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor, based on linear curve fitting, is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.
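
    A minimal version of such a linear-curve-fitting predictor can be sketched as follows (window length, sample times, and the latency value are illustrative assumptions, not the paper's parameters): fit a line to the most recent samples and extrapolate forward by the measured communication latency.

```python
import numpy as np

def predict(times, values, latency):
    """Extrapolate the signal one latency interval past the newest sample,
    using a least-squares linear fit over the recent sample window."""
    slope, intercept = np.polyfit(times, values, deg=1)
    return slope * (times[-1] + latency) + intercept

# Hypothetical sliding window: the last 5 samples of a remote bus signal.
t = np.array([0.00, 0.01, 0.02, 0.03, 0.04])   # seconds
v = 1.0 + 2.0 * t                              # exactly linear here
assert abs(predict(t, v, latency=0.005) - (1.0 + 2.0 * 0.045)) < 1e-9
```

    For a locally linear signal the predictor cancels the latency exactly; during fast transients the residual error depends on the window length and the curvature of the signal, which is what the co-simulation experiments quantify.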

  8. Geographically distributed real-time digital simulations using linear prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. Analyzing ever-evolving power systems and understanding their complex dynamic and transient interactions demand larger real-time computation capabilities. Because these facilities are interspersed across the globe, geographically distributed real-time co-simulation is pursued and presented here as a way to leverage unique facilities for power system analysis. However, the communication latency between different simulator locations may introduce inaccuracy into geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. To reduce this effect, a real-time data predictor based on linear curve fitting is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the ability of the real-time data predictor to compensate for it.

  9. Traditional Tracking with Kalman Filter on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; MacNeill, Ian; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-05-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. To stay within power density limits while still obtaining Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, it will be by far the dominant problem. The most common track-finding techniques in use today, however, are those based on the Kalman filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
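The Kalman-filter core of such track fitting can be illustrated with a toy version: a 1-D constant-slope state (position, slope) updated layer by layer with measured hit positions. This is a hedged sketch of the textbook filter, not the experiments' 5-parameter helix fit:

```python
# Toy Kalman track fit: state = [position, slope], one detector layer per
# step (propagation F = [[1, dz], [0, 1]]), scalar hit measurement H = [1, 0].

def kalman_track(hits, dz, meas_var):
    x = [hits[0], 0.0]                 # initial state from the first hit
    P = [[meas_var, 0.0], [0.0, 1.0]]  # state covariance (loose slope prior)
    for z in hits[1:]:
        # Predict: propagate state and covariance to the next layer
        x = [x[0] + dz * x[1], x[1]]
        P = [[P[0][0] + dz * (P[1][0] + P[0][1]) + dz * dz * P[1][1],
              P[0][1] + dz * P[1][1]],
             [P[1][0] + dz * P[1][1], P[1][1]]]
        # Update with the measured hit position
        S = P[0][0] + meas_var               # innovation variance
        K = [P[0][0] / S, P[1][0] / S]       # Kalman gain
        r = z - x[0]                         # residual
        x = [x[0] + K[0] * r, x[1] + K[1] * r]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# Hits from a straight track with slope 0.5 per layer (dz = 1)
state = kalman_track([0.0, 0.5, 1.0, 1.5, 2.0], dz=1.0, meas_var=0.01)
print(state)  # slope estimate approaches 0.5
```

The parallelization challenge the abstract refers to comes from running many such fits concurrently over combinatorially many hit candidates, which is what vector units and many-core hardware are meant to absorb.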

  10. Proposals for the implementation of the variants of automatic control of the telescope AZT-2

    NASA Astrophysics Data System (ADS)

    Shavlovskyi, V. I.; Puha, S. P.; Vidmachenko, A. P.; Volovyk, D. V.; Puha, G. P.; Obolonskyi, V. O.; Kratko, O. O.; Stefurak, M. V.

    2018-05-01

    Based on the experience of astronomical observations, the structural features, and the results of a review of the technical state of the mechanisms of the AZT-2 telescope at the Main Astronomical Observatory of the NAS of Ukraine, it was decided in 2012 to carry out work on its modernization. To this end, it was proposed that the telescope control system consist of angle sensors on the time axis "alpha" and the axis "delta", a personal computer (PC), corresponding software, a power control unit, and the telescope's rotation system. The angle sensors should be absolute, with a resolution better than 10 arcminutes. The PC should process the data from the angle sensors and control the power node. The developed software allows the operator to point the telescope in an automatic mode and to set the necessary system parameters. Using the PC, the power control node will directly control the motor of the rotation system.

  11. A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments

    NASA Astrophysics Data System (ADS)

    Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik

    2011-08-01

    Advances in digital detector technology presently lead to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, the data acquisition can be much faster than the reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly, and quasi real-time visualization is now possible.
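The pipelined CPU/GPU overlap described above can be sketched with two threads and a bounded queue: one stage prepares slices while the other "reconstructs" them. This is an illustrative stand-in (the doubling and summing below are placeholders for the real filtering and back-projection kernels):

```python
# Hedged sketch of the pipeline: preparation overlaps with reconstruction
# via a bounded producer-consumer buffer.
import threading, queue

def prepare(slices, q):
    for s in slices:
        q.put([x * 2 for x in s])   # stand-in for filtering/normalisation
    q.put(None)                     # sentinel: no more slices

def reconstruct(q, out):
    while True:
        s = q.get()
        if s is None:
            break
        out.append(sum(s))          # stand-in for the GPU reconstruction

raw = [[1, 2], [3, 4], [5, 6]]
q = queue.Queue(maxsize=2)          # bounded buffer between the two stages
results = []
t1 = threading.Thread(target=prepare, args=(raw, q))
t2 = threading.Thread(target=reconstruct, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [6, 14, 22]
```

In the real system the consumer stage would issue asynchronous host-to-device copies and GPU kernels, so transfers and computation overlap exactly as the bounded queue overlaps the two threads here.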

  12. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
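The general recipe in the abstract (sketch the data with a random test matrix, refine with power iterations, then run exact DMD on the compressed snapshots) can be illustrated as follows. This is a hedged sketch following that recipe, not the authors' code; the function name and toy system are invented for the example:

```python
# Randomized DMD sketch: compress with a randomized range finder
# (oversampling + power iterations), then do exact DMD in the small space.
import numpy as np

def randomized_dmd(X, Y, rank, oversample=5, power_iters=2, seed=0):
    """X, Y: snapshot pairs with Y ~= A X. Returns DMD eigenvalues, modes."""
    rng = np.random.default_rng(seed)
    # Sketch the range of X with a random test matrix
    Q, _ = np.linalg.qr(X @ rng.standard_normal((X.shape[1], rank + oversample)))
    for _ in range(power_iters):          # power iterations sharpen the basis
        Q, _ = np.linalg.qr(X @ (X.T @ Q))
    Xs, Ys = Q.T @ X, Q.T @ Y             # project both snapshot sets
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Ys @ Vt.T / s          # low-rank linear operator
    evals, W = np.linalg.eig(Atilde)
    modes = Q @ (Ys @ Vt.T / s) @ W       # DMD modes lifted back to full space
    return evals, modes

# Toy linear system with two decaying modes (eigenvalues 0.9 and 0.8)
A = np.diag([0.9, 0.8])
X = np.zeros((2, 20)); X[:, 0] = [1.0, 1.0]
for k in range(1, 20):
    X[:, k] = A @ X[:, k - 1]
evals, _ = randomized_dmd(X[:, :-1], X[:, 1:], rank=2)
print(sorted(evals.real))  # recovers [0.8, 0.9]
```

A single-pass variant, as the abstract notes, would form the sketches from streamed snapshot blocks without ever holding the full data matrix.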

  13. Biophysics and systems biology.

    PubMed

    Noble, Denis

    2010-03-13

    Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights.
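The Hodgkin-Huxley paradigm the abstract refers to is concrete enough to sketch: the classic 1952 squid-axon equations integrated with forward Euler. The parameters below are the standard textbook values, not taken from this article:

```python
# Minimal Hodgkin-Huxley integration (textbook squid-axon parameters).
import math

def hh_simulate(i_stim=10.0, dt=0.01, t_end=50.0):
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387         # reversal potentials, mV
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177    # resting steady state
    trace = []
    for _ in range(int(t_end / dt)):
        # Voltage-dependent gating rates (1/ms)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        # Ionic currents and forward-Euler update
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_stim - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace

v_trace = hh_simulate()
print(max(v_trace))  # the membrane fires: peak well above 0 mV
```

Whole-organ models of the heart couple thousands of such cell models through tissue equations, which is where the computational power the article discusses becomes necessary.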

  14. Biophysics and systems biology

    PubMed Central

    Noble, Denis

    2010-01-01

    Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights. PMID:20123750

  15. Towards a diachronic-synchronic view of future communication policies in Africa.

    PubMed

    Wilson, D

    1989-01-01

    Democratic communication policy in Africa cannot be exploited by nations that have a technological edge. Traditional, indigenous systems have to be analyzed through a diachronic-synchronic study to establish more effective communication systems for Africa. In this diachronic-synchronic view, communication is a cultural transaction and transmission taking place over time. Old processes are synchronized with modern technology, closing the cultural gap between traditional and modern societies. Traditional communication processes have been ignored for too long, and communication as an industry has become an elite enterprise boasting expensive gadgets (television and radio, newspapers and magazines). Communication is culture, since it is a manifestation of the cultural norms of society. This monopolistic pattern of cultural or media imperialism is pervasive in the Third World. Communication theory is a multidisciplinary field which strongly reflects the music, art, religion, etc. of the society. The communication of religion has become a big cultural activity worldwide, and televangelism is a significant feature of it, although it is strictly controlled in Nigeria despite the pervasiveness of religion in society. The traditional communication system consists of the village council of elders and chiefs and the gongman as a broadcaster or reporter. Modern communication systems utilize television, satellite systems, and computers. These powerful instruments, however, have become liabilities in Third World countries. Dominating powers sell these instruments of power and culture to people who pay for the equipment with foreign loans. A very grave communication problem in Africa is the low literacy rate and the preeminence of foreign languages as the only medium for reaching the majority of the literate public. The culture of the poorer nations is often distorted. Exhibition of African culture is rare, and Western culture is transmitted to poor nations through satellite systems. Perhaps a moratorium on all cultural, technological, and media hardware of little value to the national objective should be declared, followed by a renewal of faith in an authentic, indigenous culture.

  16. It's Indisputable: Five Facts About Planning and Operating Modern Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloom, Aaron; Helman, Udi; Holttinen, Hannele

    An indisputable fact cannot be rebutted. It is supported by theory and experience. Over the past 25 years, wind and solar generation has undergone dramatic growth, resulting in a variety of experiences with integrating wind and solar into the planning and operation of modern electric power systems. In this article, we bring together examples from Europe, North America, and Australia to identify five indisputable facts about planning and operating modern power systems. Taken together, we hope these experiences can help build consensus among the engineering and public policy communities about the current state of wind and solar integration and also facilitate conversations about evolving future challenges.

  17. A Research Framework for Demonstrating Benefits of Advanced Control Room Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Blanc, Katya; Boring, Ronald; Joe, Jeffrey

    Control room modernization is an important part of life extension for the existing light water reactor fleet. None of the 99 currently operating commercial nuclear power plants in the U.S. has completed a full-scale control room modernization to date. A full-scale modernization might, for example, entail replacement of all analog panels with digital workstations. Such modernizations have been undertaken successfully in upgrades in Europe and Asia, but the U.S. has yet to undertake a control room upgrade of this magnitude. Instead, nuclear power plant main control rooms for the existing commercial reactor fleet remain significantly analog, with only limited digital modernizations. Previous research under the U.S. Department of Energy's Light Water Reactor Sustainability Program has helped establish a systematic process for control room upgrades that support the transition to a hybrid control room. While the guidance developed to date helps streamline the process of modernization and reduce the costs and uncertainty associated with introducing digital control technologies into an existing control room, these upgrades do not achieve the full potential of newer technologies that might otherwise enhance plant and operator performance. The aim of the control room benefits research presented here is to identify previously overlooked benefits of modernization, identify candidate technologies that may facilitate such benefits, and demonstrate these technologies through human factors research. This report serves as an outline for planned research on the benefits of greater modernization in the main control rooms of nuclear power plants.

  18. Computational membrane biophysics: From ion channel interactions with drugs to cellular function.

    PubMed

    Miranda, Williams E; Ngo, Van A; Perissinotti, Laura L; Noskov, Sergei Yu

    2017-11-01

    The rapid development of experimental and computational techniques has changed fundamentally our understanding of cellular-membrane transport. The advent of powerful computers and refined force-fields for proteins, ions, and lipids has expanded the applicability of Molecular Dynamics (MD) simulations. A myriad of cellular responses is modulated through the binding of endogenous and exogenous ligands (e.g. neurotransmitters and drugs, respectively) to ion channels. Deciphering the thermodynamics and kinetics of the ligand binding processes to these membrane proteins is at the heart of modern drug development. The ever-increasing computational power has already provided insightful data on the thermodynamics and kinetics of drug-target interactions, free energies of solvation, and partitioning into lipid bilayers for drugs. This review aims to provide a brief summary about modeling approaches to map out crucial binding pathways with intermediate conformations and free-energy surfaces for drug-ion channel binding mechanisms that are responsible for multiple effects on cellular functions. We will discuss post-processing analysis of simulation-generated data, which are then transformed to kinetic models to better understand the molecular underpinning of the experimental observables under the influence of drugs or mutations in ion channels. This review highlights crucial mathematical frameworks and perspectives on bridging different well-established computational techniques to connect the dynamics and timescales from all-atom MD and free energy simulations of ion channels to the physiology of action potentials in cellular models. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
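The step from simulation-derived rate constants to a kinetic model of drug occupancy can be illustrated with the simplest possible scheme: a generic two-state (bound/unbound) model, not taken from this paper, with hypothetical rate constants:

```python
# Hedged illustration: kon/koff rate constants -> kinetic model of occupancy.
import math

def occupancy(t, kon, koff, lig):
    """Fraction of channels drug-bound at time t, starting unbound.
    dB/dt = kon*L*(1 - B) - koff*B  ->  relaxes to kon*L / (kon*L + koff)."""
    k_obs = kon * lig + koff        # observed relaxation rate
    b_eq = kon * lig / k_obs        # equilibrium bound fraction
    return b_eq * (1.0 - math.exp(-k_obs * t))

kon, koff, lig = 1.0e6, 10.0, 1.0e-5   # 1/(M*s), 1/s, M (hypothetical values)
print(occupancy(1.0, kon, koff, lig))  # approaches kon*L/(kon*L+koff) = 0.5
```

Real ion-channel schemes add state dependence (e.g. binding only to open or inactivated states), turning the scalar equation into a rate-matrix model, but the fitting-to-MD-derived-rates workflow is the same.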

  19. Visualization and Interaction in Research, Teaching, and Scientific Communication

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.

    2017-12-01

    Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices give those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches. I illustrate how a little programming ability can free scientists from the constraints of existing tools and can facilitate the development of a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available on desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but they advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.

  20. Study of dynamics of X-14B VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Loscutoff, W. V.; Mitchiner, J. L.; Roesener, R. A.; Seevers, J. A.

    1973-01-01

    Research was initiated to investigate certain facets of modern control theory and their integration with a digital computer to provide a tractable flight control system for a VTOL aircraft. Since the hover mode is the most demanding phase in the operation of a VTOL aircraft, the research efforts were concentrated in this mode of aircraft operation. Research work on three different aspects of the operation of the X-14B VTOL aircraft is discussed. A general theory for optimal, prespecified, closed-loop control is developed. The ultimate goal was optimal decoupling of the modes of the VTOL aircraft to simplify the pilot's task of handling the aircraft. Modern control theory is used to design deterministic state estimators which provide state variables not measured directly, but which are needed for state variable feedback control. The effect of atmospheric turbulence on the X-14B is investigated. A maximum magnitude gust envelope within which the aircraft could operate stably with the available control power is determined.
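The state-estimator idea mentioned above (recovering state variables that are not measured directly, so they can be fed back) can be sketched with a Luenberger observer for a double integrator: only position is measured, and the observer reconstructs the velocity. Gains and dynamics here are chosen for illustration, not taken from the report:

```python
# Hedged observer sketch: estimate unmeasured velocity from position
# measurements of a double integrator (x' = v, v' = 0).

def simulate_observer(t_end=10.0, dt=0.001):
    x, v = 1.0, -0.5        # true position and (unmeasured) velocity
    xh, vh = 0.0, 0.0       # observer's estimates start at zero
    l1, l2 = 2.0, 1.0       # gains place the error poles at s = -1, -1
    for _ in range(int(t_end / dt)):
        innov = x - xh                   # position is the only measurement
        x, v = x + dt * v, v             # true plant (Euler step)
        xh = xh + dt * (vh + l1 * innov) # observer copies the plant model...
        vh = vh + dt * (l2 * innov)      # ...driven by the innovation
    return v, vh

v_true, v_est = simulate_observer()
print(v_true, v_est)  # estimate converges to the unmeasured velocity
```

With the estimation error obeying the stable (A - LC) dynamics, the reconstructed velocity can be used in state-variable feedback exactly as if it had been measured.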

  1. Self organising maps for visualising and modelling

    PubMed Central

    2012-01-01

    The paper describes the motivation of SOMs (Self Organising Maps) and how they have become more accessible thanks to widely available modern, more powerful, cost-effective computers. Their advantages compared to Principal Components Analysis and Partial Least Squares are discussed: they can be applied to non-linear data, are less dependent on least-squares solutions and normality of errors, and are less influenced by outliers. In addition, there is a wide variety of intuitive methods for visualisation that allow full use of the map space. Modern problems in analytical chemistry, including applications to cultural heritage studies and to environmental, metabolomic and biological problems, result in complex datasets. Methods for visualising maps are described, including best matching units, hit histograms, unified distance matrices and component planes. Supervised SOMs for classification, including multifactor data and variable selection, are discussed, as is their use in Quality Control. The paper is illustrated using four case studies, namely the Near Infrared of food, the thermal analysis of polymers, metabolomic analysis of saliva using NMR, and on-line HPLC for pharmaceutical process monitoring. PMID:22594434
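The best-matching-unit and neighbourhood update at the heart of a SOM can be shown with a toy 1-D map on 2-D data. This is a hedged minimal sketch of the standard algorithm, not the software used in the paper; all parameter values are illustrative:

```python
# Toy Self Organising Map: 1-D ring of units, Gaussian neighbourhood,
# decaying learning rate and neighbourhood width.
import math, random

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0, seed=1):
    rng = random.Random(seed)
    w = [[rng.random(), rng.random()] for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5) # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),               # best matching unit
                      key=lambda i: (w[i][0]-x[0])**2 + (w[i][1]-x[1])**2)
            for i in range(n_units):                # neighbourhood update
                hfac = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                w[i][0] += lr * hfac * (x[0] - w[i][0])
                w[i][1] += lr * hfac * (x[1] - w[i][1])
    return w

# Two well-separated clusters should win at different units of the map
data = [[0.1, 0.1], [0.0, 0.2], [0.9, 0.9], [1.0, 0.8]]
weights = train_som(data)

def bmu(x):
    return min(range(len(weights)),
               key=lambda i: (weights[i][0]-x[0])**2 + (weights[i][1]-x[1])**2)

print(bmu([0.05, 0.15]), bmu([0.95, 0.85]))  # different units win
```

Hit histograms and unified distance matrices, as discussed in the paper, are then just summaries of which units win and how far apart neighbouring units' weights lie.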

  2. Modern Computational Techniques for the HMMER Sequence Analysis

    PubMed Central

    2013-01-01

    This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944
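The computational kernel being accelerated in such work is dynamic programming over an HMM. A hedged generic sketch of the Viterbi algorithm (not HMMER's profile-HMM implementation; the toy two-state model below is invented) shows the structure that parallel hardware exploits:

```python
# Generic Viterbi decoding in log-space: most likely state path for an
# observation sequence under a discrete HMM.
import math

def viterbi(obs, states, start, trans, emit):
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({}); back.append({})
        for s in states:
            best = max(states, key=lambda p: v[t-1][p] + math.log(trans[p][s]))
            back[t][s] = best
            v[t][s] = (v[t-1][best] + math.log(trans[best][s])
                       + math.log(emit[s][obs[t]]))
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # follow backpointers
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy 2-state model: 'M' emits mostly 'A', 'I' emits mostly 'G'
states = ('M', 'I')
start = {'M': 0.6, 'I': 0.4}
trans = {'M': {'M': 0.8, 'I': 0.2}, 'I': {'M': 0.2, 'I': 0.8}}
emit = {'M': {'A': 0.9, 'G': 0.1}, 'I': {'A': 0.1, 'G': 0.9}}
print(viterbi("AAGGA", states, start, trans, emit))  # ['M','M','I','I','M']
```

Accelerated implementations vectorize the inner max over predecessor states and stripe many sequence cells across SIMD lanes or GPU threads, but the recurrence is the one above.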

  3. Beyond prometheus and Bakasura: Elements of an alternative to nuclear power in India's response to the energy-environment crisis

    NASA Astrophysics Data System (ADS)

    Mathai, Manu Verghese

    In India, as elsewhere, modern energy-society relations and economic development, metaphorically, Prometheus and the insatiable demon Bakasura, respectively, have produced unprecedented economic growth even as they have ushered in the "energy-environment crisis." Government efforts interpret the crisis as insufficiently advanced modernity. Resulting efforts to redress this crisis reaffirm more economic growth through modern energy-society relations and economic development. The civilian nuclear power renaissance in India, amidst rapidly accelerating economic growth and global climate change, is indicative. It presents the prospect of producing "abundant energy" and being "green" at the same time. This confidence in civilian nuclear power is questioned. It is investigated as proceeding from the modern discourse of "Cornucopianism" and its institutionalization as "modern megamachine organization of society." It is found that civilian nuclear power as energy policy is based on a presumption of overabundance as imperative for viable social and economic development; is predisposed to centralization and secrecy; its institutionalization limits deliberation on energy-society relations to technocratic terms; such deliberation is restrained to venues accessible only to the highest political office and technocratic elite; it fails to redress entrenched "energy injustice;" it embodies "modern technique" fostering the "displaced person" while eclipsing the "complete human personality." Overall, despite its green rhetoric, civilian nuclear power reaffirms the "politics of commodification" and refutes social and political arrangements for sustainability and equity. Alternatives are surveyed as strategies for resistance. They include the DEFENDUS approach for energy planning, the "Human Development and Capability Approach" and the "Sustainable Energy Utility." 
These alternatives and the synergy between them are offered as avenues to resist nuclear power as a response to the energy-environment crisis and to reclaim human-centered imagination and creativity for charting new realities free of the awe inspired by Prometheus and fear of the insatiable Bakasura.

  4. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50 times speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
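The finite-volume building block behind such solvers can be shown in miniature: a conservative first-order upwind step for 1-D linear advection, written as a vectorized (data-parallel) update of the kind that maps one cell per GPU thread. This is a hedged illustration of the method class, far simpler than the 3-D Euler/Navier-Stokes schemes of the paper:

```python
# 1-D finite-volume upwind step for u_t + c u_x = 0 (c > 0, periodic BCs),
# written as a data-parallel array update.
import numpy as np

def upwind_step(u, c, dx, dt):
    flux = c * u                              # flux through each cell face
    return u - (dt / dx) * (flux - np.roll(flux, 1))

nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / c                             # CFL number 0.4 for stability
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)           # Gaussian pulse
mass0 = u.sum() * dx
for _ in range(200):
    u = upwind_step(u, c, dx, dt)
print(abs(u.sum() * dx - mass0))              # conservative: mass preserved
```

On a GPU the same update becomes one CUDA kernel with one thread per cell; the memory-optimization questions the abstract mentions are about where `flux` and its neighbours live (shared vs. global memory).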

  5. Recovery Act: Energy Efficiency of Data Networks through Rate Adaptation (EEDNRA) - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Andrews; Spyridon Antonakopoulos; Steve Fortune

    2011-07-12

    This Concept Definition Study focused on developing a scientific understanding of methods to reduce energy consumption in data networks using rate adaptation. Rate adaptation is a collection of techniques that reduce energy consumption when traffic is light, and only require full energy when traffic is at full provisioned capacity. Rate adaptation is a very promising technique for saving energy: modern data networks are typically operated at average rates well below capacity, but network equipment has not yet been designed to incorporate rate adaptation. The Study concerns packet-switching equipment, routers and switches; such equipment forms the backbone of the modern Internet. The focus of the study is on algorithms and protocols that can be implemented in software or firmware to exploit hardware power-control mechanisms. Hardware power-control mechanisms are widely used in the computer industry, and are beginning to be available for networking equipment as well. Network equipment has different performance requirements than computer equipment because of the very fast rate of packet arrival; hence novel power-control algorithms are required for networking. This study resulted in five published papers, one internal report, and two patent applications, documented below. The specific technical accomplishments are the following: • A model for the power consumption of switching equipment used in service-provider telecommunication networks as a function of operating state, and measured power-consumption values for typical current equipment. • An algorithm for use in a router that adapts packet processing rate, and hence power consumption, to traffic load while maintaining performance guarantees on delay and throughput. • An algorithm that performs network-wide traffic routing with the objective of minimizing energy consumption, assuming that routers have less-than-ideal rate adaptivity. • An estimate of the potential energy savings in service-provider networks using feasibly implementable rate adaptivity. • A buffer-management algorithm designed to reduce the size of router buffers, and hence the energy consumed. • A packet-scheduling algorithm designed to minimize packet-processing energy requirements. Additional research is recommended in at least two areas: further exploration of rate adaptation in network switching equipment, including incorporation of rate adaptation in actual hardware, allowing experimentation in operational networks; and development of control protocols that allow parts of networks to be shut down while minimizing disruption to traffic flow in the network. The research is an integral part of a larger effort within Bell Laboratories, Alcatel-Lucent, aimed at dramatic improvements in the energy efficiency of telecommunication networks. This Study did not explicitly consider any commercialization opportunities.
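The basic energy argument for rate adaptation can be made with a back-of-envelope model. The numbers and the linear power model below are illustrative assumptions, not figures from the report:

```python
# Hedged model: energy of a port at fixed full rate vs. one that adapts its
# rate to offered load, with power P(u) = P_idle + (P_max - P_idle) * u.

def energy(loads, p_idle, p_max, adaptive):
    total = 0.0
    for u in loads:                       # u = utilisation in [0, 1] per hour
        rate = u if adaptive else 1.0     # adaptive: just fast enough for load
        total += p_idle + (p_max - p_idle) * rate
    return total                          # watt-hours over the trace

# A stylised 24-hour traffic trace: quiet at night, busy in the day
trace = [0.1] * 8 + [0.6] * 8 + [0.3] * 8
fixed = energy(trace, p_idle=100.0, p_max=300.0, adaptive=False)
adapt = energy(trace, p_idle=100.0, p_max=300.0, adaptive=True)
print(fixed, adapt, 1 - adapt / fixed)   # adaptive saves ~44% on this trace
```

The residual idle power `p_idle` is why the report also recommends protocols for shutting parts of the network down entirely: rate adaptation alone cannot recover it.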

  6. Energy Efficiency Maximization of Practical Wireless Communication Systems

    NASA Astrophysics Data System (ADS)

    Eraslan, Eren

    Energy consumption of the modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed in order to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry, however, they are capable of changing the transmission parameters, such as number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on the link adaptation for energy efficiency maximization problem, which is defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performances of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real-time. It turns out that the brute force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between the energy efficiency and transmit power, and prove that energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and take this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use higher number of spatial streams than the channel can support. 
These algorithms and our novel formulations provide simpler computations and limit the search space into a much smaller set; hence reducing the computational complexity by orders of magnitude without sacrificing the performance. The result of this work is a highly practical link adaptation protocol for maximizing the energy efficiency of modern wireless communication systems. Simulation results show orders of magnitude gain in the energy efficiency of the link. We also implemented the link adaptation protocol on real-time MIMO-OFDM radios and we report on the experimental results. To the best of our knowledge, this is the first reported testbed that is capable of performing energy-efficient fast link adaptation using PHY layer information.
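The quasiconcavity result is what makes the transmit-power dimension cheap to search: a unimodal function's peak can be located by ternary search instead of brute force. A hedged sketch with a hypothetical link model (the rate and circuit-power parameters below are invented, and the simple `rate/(p + p_circuit)` efficiency is a stand-in for the paper's full MIMO-OFDM model):

```python
# Energy efficiency EE(p) = throughput / (transmit power + circuit power)
# is quasiconcave in p, so ternary search finds its peak.
import math

def ee(p, gain=0.1, noise=1e-3, bw=1.0, p_circuit=0.5):
    return bw * math.log2(1.0 + gain * p / noise) / (p + p_circuit)

def ternary_search(f, lo, hi, iters=100):
    for _ in range(iters):                 # interval shrinks by 1/3 per step
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

p_star = ternary_search(ee, 1e-6, 10.0)
# Compare against a dense brute-force grid over the same power range
grid = [i * 1e-3 for i in range(1, 10001)]
p_brute = max(grid, key=ee)
print(p_star, p_brute)  # the two agree closely
```

This is the sense in which the power dimension can be "taken out of the search space": a logarithmic number of evaluations replaces a scan over all power levels, leaving only the discrete mode dimensions to enumerate.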

  7. Generative models for clinical applications in computational psychiatry.

    PubMed

    Frässle, Stefan; Yao, Yu; Schöbi, Dario; Aponte, Eduardo A; Heinzle, Jakob; Stephan, Klaas E

    2018-05-01

    Despite the success of modern neuroimaging techniques in furthering our understanding of cognitive and pathophysiological processes, translation of these advances into clinically relevant tools has been virtually absent until now. Neuromodeling represents a powerful framework for overcoming this translational deadlock, and the development of computational models to solve clinical problems has become a major scientific goal over the last decade, as reflected by the emergence of clinically oriented neuromodeling fields like Computational Psychiatry, Computational Neurology, and Computational Psychosomatics. Generative models of brain physiology and connectivity play a key role in this endeavor, striving for computational assays that can be applied to neuroimaging data from individual patients for differential diagnosis and treatment prediction. In this review, we focus on dynamic causal modeling (DCM) and its use for Computational Psychiatry. DCM is a widely used generative modeling framework for functional magnetic resonance imaging (fMRI) and magneto-/electroencephalography (M/EEG) data. This article reviews the basic concepts of DCM, revisits examples where it has proven valuable for addressing clinically relevant questions, and critically discusses methodological challenges and recent methodological advances. We conclude this review with a more general discussion of the promises and pitfalls of generative models in Computational Psychiatry and highlight the path that lies ahead of us. © 2018 Wiley Periodicals, Inc.

  8. Upgrading of the LGD cluster at JINR to support DLNP experiments

    NASA Astrophysics Data System (ADS)

    Bednyakov, I. V.; Dolbilov, A. G.; Ivanov, Yu. P.

    2017-01-01

    Since its construction in 2005, the Computing Cluster of the Dzhelepov Laboratory of Nuclear Problems has mainly been used to perform calculations (data analysis, simulation, etc.) for various scientific collaborations in which DLNP scientists take an active part. The Cluster also serves to train specialists. Much has changed over the past decade, and it has become necessary to upgrade the cluster, increasing its power and replacing outdated equipment to keep it reliable and up to date. In this work we describe our experience with this upgrade, which can help system administrators put new equipment for clusters of this type into operation quickly and efficiently.

  9. Instituto Nacional de Electrification, Guatemala Load Dispatch Center and Global Communications Center. Feasibility report (Instituto Nacional de Electrificacion, Guatemala Centro Nacional de Despacho de Carga y Sistema Global de Comunicaciones). Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-07-01

    The report presents the results of a feasibility study for the National Load Dispatch Center and Global Communications System Project in Guatemala. The project consists of a communication system which will provide Instituto Nacional de Electrificacion (INDE) operations personnel direct voice access to all major power system facilities. In addition, a modern computer-based load dispatch center has been configured on a secure and reliable basis to provide automatic generation control of all major interconnected generating plants within Guatemala.

  10. The Place of Science in the Modern World: A Speech by Robert Millikan

    NASA Astrophysics Data System (ADS)

    Williams, Kathryn R.

    2001-07-01

    A speech by Robert Millikan, reprinted in the May 1930 issue, pertains to issues still prevalent in the 21st century. In "The Place of Science in the Modern World", the Nobel laureate defends science against charges of its detrimental effects on society, its materialistic intentions, and the destructive powers realized during the First World War. He also expresses concern that "this particular generation of Americans" may lack the moral qualities needed to make responsible use of the increased powers afforded by modern science.

  11. Making Research Fly in Schools: "Drosophila" as a Powerful Modern Tool for Teaching Biology

    ERIC Educational Resources Information Center

    Harbottle, Jennifer; Strangward, Patrick; Alnuamaani, Catherine; Lawes, Surita; Patel, Sanjai; Prokop, Andreas

    2016-01-01

    The "droso4schools" project aims to introduce the fruit fly "Drosophila" as a powerful modern teaching tool to convey curriculum-relevant specifications in biology lessons. Flies are easy and cheap to breed and have been at the forefront of biology research for a century, providing unique conceptual understanding of biology and…

  12. Securing Secrets and Managing Trust in Modern Computing Applications

    ERIC Educational Resources Information Center

    Sayler, Andy

    2016-01-01

    The amount of digital data generated and stored by users increases every day. In order to protect this data, modern computing systems employ numerous cryptographic and access control solutions. Almost all of such solutions, however, require the keeping of certain secrets as the basis of their security models. How best to securely store and control…

  13. Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education

    ERIC Educational Resources Information Center

    Zhamanov, Azamat; Yoo, Seong-Moo; Sakhiyeva, Zhulduz; Zhaparov, Meirambek

    2018-01-01

    Students nowadays are hard to be motivated to study lessons with traditional teaching methods. Computers, smartphones, tablets and other smart devices disturb students' attentions. Nevertheless, those smart devices can be used as auxiliary tools of modern teaching methods. In this article, the authors review two popular modern teaching methods:…

  14. Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver

    NASA Astrophysics Data System (ADS)

    Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre

    2014-06-01

    This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.

  15. Tribological characteristic enhancement effects by polymer thickened oil in lubricated sliding contacts

    NASA Astrophysics Data System (ADS)

    Pratomo, Ariawan Wahyu; Muchammad; Tauviqirrahman, Mohammad; Jamari; Bayuseno, Athanasius P.

    2016-04-01

    Polymer thickened oils are the most preferred materials for modern lubrication applications due to their high shear stability. The present paper explores a lubrication mechanism in sliding contact lubricated with polymer thickened oil, considering cavitation. Investigations are carried out using a numerical method based on the commercial CFD (computational fluid dynamics) software ANSYS Fluent to assess the tribological characteristics (i.e. hydrodynamic pressure distribution) of the lubricated sliding contact. The Zwart-Gerber-Belamri cavitation model is adopted in this simulation to predict the extent of the full film region. The polymer thickened oil is characterized as a non-Newtonian power-law fluid. The simulation results show that including cavitation leads to a lower pressure profile compared to the case without cavitation. In addition, it is concluded that the lubrication performance with polymer thickened oil is strongly dependent on the power-law index of the lubricant.

  16. Fast Ordered Sampling of DNA Sequence Variants.

    PubMed

    Greenberg, Anthony J

    2018-05-04

    Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.
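The tape-drive-era technique referenced above is commonly identified with selection sampling (Knuth's Algorithm S), which emits exactly n of N records, in their original order, in a single sequential pass. A minimal sketch, assuming a plain iterable stands in for a genotype file:

```python
import random

def ordered_sample(stream, n, N, rng=random):
    """Selection sampling (Knuth's Algorithm S): draw exactly n of N records,
    preserving their order, in one sequential pass -- suited to streaming a
    large variant file without loading it into memory.
    This is a generic sketch, not the paper's actual implementation."""
    chosen = []
    seen = 0
    for record in stream:
        # Keep the current record with probability
        # (still needed) / (still available); this yields a uniform
        # sample of size n and never overshoots.
        if rng.random() * (N - seen) < (n - len(chosen)):
            chosen.append(record)
            if len(chosen) == n:
                break
        seen += 1
    return chosen
```

Because each record is examined once and discarded immediately if not selected, memory use is O(n) regardless of file size, which is what makes the method attractive on both spinning and solid-state drives.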

  17. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    NASA Astrophysics Data System (ADS)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    2016-06-01

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.
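For readers unfamiliar with the optimizer, a minimal DE/rand/1/bin loop can be sketched as follows. The objective here is a cheap analytic test function, whereas the study above evaluates each candidate with a GPU-accelerated CFD solver; the parameter choices below are common illustrative defaults, not the authors' settings.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, rng=None):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random population members, crossover with the current member, and keep
    the trial vector if it is no worse."""
    rng = rng or random.Random()
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to the box
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

Each generation costs pop_size objective evaluations, which is exactly why the paper offloads the evaluation (the CFD run) to the GPU rather than the optimizer itself.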

  18. Periodic control of the individual-blade-control helicopter rotor. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mckillip, R. M., Jr.

    1984-01-01

    Results of an investigation into methods of controller design for an individual helicopter rotor blade in the high forward-flight speed regime are described. This operating condition poses a unique control problem in that the perturbation equations of motion are linear with coefficients that vary periodically with time. The design of a control law was based on extensions to modern multivariate synthesis techniques and incorporated a novel approach to the reconstruction of the missing system state variables. The controller was tested both on an electronic analog computer simulation of the out-of-plane flapping dynamics and on a four-foot-diameter single-bladed model helicopter rotor in the M.I.T. 5x7 subsonic wind tunnel at high levels of advance ratio. It is shown that modal control using the IBC concept is possible over a large range of advance ratios with only a modest amount of computational power required.

  19. The quest for the perfect gravity anomaly: Part 1 - New calculation standards

    USGS Publications Warehouse

    Li, X.; Hildenbrand, T.G.; Hinze, W. J.; Keller, Gordon R.; Ravat, D.; Webring, M.

    2006-01-01

    The North American gravity database, together with the national databases of Canada, Mexico, and the United States, is being revised to improve coverage, versatility, and accuracy. An important part of this effort is the revision of procedures and standards for calculating gravity anomalies, taking into account our enhanced computational power, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining different anomaly components. The most striking revision is the use of a single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of the theoretical gravity. The new standards hardly affect the interpretation of local anomalies, but do improve regional anomalies. Most importantly, the new standards can be applied consistently to gravity database compilations of nations, continents, and even the entire world. © 2005 Society of Exploration Geophysicists.
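The "theoretical gravity" computed on a single reference ellipsoid can be illustrated with the closed-form Somigliana formula. The constants below are the published GRS80 values; the sketch is illustrative and not the databases' actual processing code.

```python
import math

# GRS80 derived constants (standard published values)
GAMMA_E = 9.7803267715         # normal gravity at the equator, m/s^2
K_SOMIGLIANA = 0.001931851353  # Somigliana's constant k
E2 = 0.00669438002290          # first eccentricity squared e^2

def normal_gravity(lat_deg):
    """Closed-form Somigliana formula for theoretical (normal) gravity on
    the GRS80 ellipsoid, in m/s^2:
        gamma(phi) = gamma_e * (1 + k * sin^2(phi)) / sqrt(1 - e^2 * sin^2(phi))
    """
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K_SOMIGLIANA * s2) / math.sqrt(1.0 - E2 * s2)
```

Subtracting this latitude-dependent value from an observed absolute gravity reading is the first step in forming the gravity anomaly, which is why agreeing on one ellipsoid matters for merging national databases.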

  20. Application of Tessellation in Architectural Geometry Design

    NASA Astrophysics Data System (ADS)

    Chang, Wei

    2018-06-01

    Tessellation plays a significant role in architectural geometry design; it has been used widely throughout the history of architecture and in modern architectural design with the help of computer technology. Tessellation has existed since the birth of civilization. In terms of dimensions, there are two-dimensional and three-dimensional tessellations; in terms of symmetry, there are periodic and aperiodic tessellations. Special types of tessellations, such as Voronoi tessellations and Delaunay triangulations, are also included. Both geometry and crystallography, the latter of which is the basic theory of three-dimensional tessellations, need to be studied. Historically, tessellation was applied to skins or decorations in architecture. The development of computer technology has made tessellation more powerful, as seen in surface control, surface display, and structure design, etc. Therefore, research on the application of tessellation in architectural geometry design is of great necessity in architecture studies.

  1. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires at the same time more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, non affordable, design cycle. Modern High Performance Computing systems, especially Graphic Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD Solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.

  2. The service telemetry and control device for space experiment “GRIS”

    NASA Astrophysics Data System (ADS)

    Glyanenko, A. S.

    2016-02-01

    Problems of scientific instrument control (for example, fine control of measuring paths), collection of auxiliary service information (instrument health, experiment conditions, etc.), and preliminary data processing are relevant for any space instrument. Modern space research instruments are impossible to imagine without digital data processing methods, specialized or standard interfaces, and computing facilities. To implement these functions in the "GRIS" experiment onboard the ISS while minimizing size and power consumption, the "system-on-chip" concept was chosen and realized. A computing kernel and all necessary peripherals were created in a programmable logic device from the Microsemi ProASIC3 family with a capacity of up to 3M system gates. In this paper we discuss the structure, capabilities, and resources of the service telemetry and control device for the "GRIS" space experiment.

  3. Explorationists and dinosaurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, W.S.

    1993-02-01

    The exploration industry is changing, exploration technology is changing and the explorationist's job is changing. Resource companies are diversifying internationally and their central organizations are providing advisors rather than services. As a result, the relationship between the resource company and the contractor is changing. Resource companies are promoting standards so that all contract services in all parts of the world will look the same to their advisors. Contractors, for competitive reasons, want to look "different" from other contractors. The resource companies must encourage competition between contractors to insure the availability of new technology but must also resist the current trend of burdening the contractor with more and more of the risk involved in exploration. It is becoming more and more obvious that geophysical expenditures represent the best "value added" expenditures in exploration and development budgets. As a result, seismic-related contractors represent the growth component of our industry. The predominant growth is in 3-D seismic technology, and this growth is being further propelled by the computational power of the new generation of massively parallel computers and by recent advances in computer graphic techniques. Interpretation of seismic data involves the analysis of wavelet shapes and amplitudes prior to stacking the data. Thus, modern interpretation involves understanding compressional waves, shear waves, and propagating modes which create noise and interference. Modern interpretation and processing are carried out simultaneously, iteratively, and interactively and involve many physics-related concepts. These concepts are not merely tools for the interpretation, they are the interpretation. Explorationists who do not recognize this fact are going the way of the dinosaurs.

  4. Improving robustness and computational efficiency using modern C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  5. Modern Air&Space Power and political goals at war

    NASA Astrophysics Data System (ADS)

    Özer, Güngör.

    2014-05-01

    Modern Air and Space Power is increasingly becoming a political tool, and this article discusses it in that role. The primary purpose of the article is to examine how Air and Space Power can contribute to security, and to determine whether it can achieve political goals at war on its own, using the SWOT analysis method and analysing the role of Air and Space Power in Operation Unified Protector (Libya) as a case study. In conclusion, Air and Space Power may not be sufficient to achieve political goals on its own. However, depending on the situation, it may partially achieve political aims against an adversary on its own. Moreover, it can alone persuade an adversary to alter its behavior in war.

  6. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. 
cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap. PMID:24112435
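Connectivity mapping scores a query gene signature against a reference ranked expression profile. A simplified signed-rank connection score, in the spirit of (but not identical to) the statistic computed by sscMap and cudaMap, can be sketched as follows; the function name and normalization are illustrative assumptions.

```python
def connection_score(ranked_genes, up_genes, down_genes):
    """Simplified signed-rank connection score in [-1, 1].
    ranked_genes is ordered from most up-regulated to most down-regulated
    in the reference profile. Up-signature genes near the top and
    down-signature genes near the bottom push the score toward +1
    (the reference compound mimics the signature); the reverse pattern
    pushes it toward -1 (the compound opposes the signature)."""
    n = len(ranked_genes)
    rank_value = {g: n - i for i, g in enumerate(ranked_genes)}  # top gene -> n
    center = (n + 1) / 2.0
    score = sum(rank_value[g] - center for g in up_genes)
    score -= sum(rank_value[g] - center for g in down_genes)
    # Normalize by the best attainable score for signatures of this size.
    max_score = sum((n - i) - center for i in range(len(up_genes)))
    max_score += sum(center - (k + 1) for k in range(len(down_genes)))
    return score / max_score
```

In a real workflow this score is computed for every reference profile, and the statistical significance of each score is then assessed by permutation, which is the embarrassingly parallel load that cudaMap moves onto the GPU.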

  7. On the Emergence of Modern Humans

    ERIC Educational Resources Information Center

    Amati, Daniele; Shallice, Tim

    2007-01-01

    The emergence of modern humans with their extraordinary cognitive capacities is ascribed to a novel type of cognitive computational process (sustained non-routine multi-level operations) required for abstract projectuality, held to be the common denominator of the cognitive capacities specific to modern humans. A brain operation (latching) that…

  8. Computational ecology as an emerging science

    PubMed Central

    Petrovskii, Sergei; Petrovskaya, Natalia

    2012-01-01

    It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been, however, much less appreciated that the context of modelling and simulations in ecology is essentially different from those that normally exist in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox, however, can be easily resolved if we recall that application of sophisticated computational methods usually requires clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still do not have mathematically accurate and unambiguous description, and available field data are often very noisy, and hence it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling such as convergence and stability become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications. PMID:23565336

  9. Architecture for an integrated real-time air combat and sensor network simulation

    NASA Astrophysics Data System (ADS)

    Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara

    2007-04-01

    An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.

  10. KAGLVis - On-line 3D Visualisation of Earth-observing-satellite Data

    NASA Astrophysics Data System (ADS)

    Szuba, Marek; Ameri, Parinaz; Grabowski, Udo; Maatouki, Ahmad; Meyer, Jörg

    2015-04-01

    One of the goals of the Large-Scale Data Management and Analysis project is to provide a high-performance framework facilitating management of data acquired by Earth-observing satellites such as Envisat. On the client-facing facet of this framework, we strive to provide a visualisation and basic analysis tool that can be used by scientists with minimal to no knowledge of the underlying infrastructure. Our tool, KAGLVis, is a JavaScript client-server Web application which leverages modern Web technologies to provide three-dimensional visualisation of satellite observables on a wide range of client systems. It takes advantage of the WebGL API to employ locally available GPU power for 3D rendering; this approach has been demonstrated to perform well even on relatively weak hardware such as the integrated graphics chipsets found in modern laptop computers, and with some user-interface tuning could even be usable on embedded devices such as smartphones or tablets. Data is fetched from the database back-end using a REST API and cached locally, both in memory and using HTML5 Web Storage, to minimise network use. Computations, such as the calculation of cloud altitude from cloud-index measurements, can be performed on either the client or the server side, depending on configuration. Keywords: satellite data, Envisat, visualisation, 3D graphics, Web application, WebGL, MEAN stack.

  11. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface is used to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
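The manager/worker task-farm design described above can be illustrated with a stdlib analogue. This sketch uses a thread pool on a single machine rather than MPI across supercomputer nodes, and the function name and one-shot resubmission policy are illustrative assumptions, not mpiWrapper's actual behavior.

```python
import concurrent.futures
import subprocess

def run_task_farm(commands, max_workers=4):
    """Task-farm sketch: a manager hands independent command-line subtasks
    to a pool of workers, collects results as they complete, and retries a
    failed subtask once (mpiWrapper does the analogous thing with MPI ranks
    and resubmission on node failure)."""
    def run(cmd):
        # Launch an unmodified external program, as the real tool does.
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run, c): c for c in commands}
        for fut in concurrent.futures.as_completed(futures):
            cmd = futures[fut]
            proc = fut.result()
            if proc.returncode != 0:   # simple one-shot resubmission
                proc = run(cmd)
            results[cmd] = proc
    return results
```

The key property, shared with the MPI version, is that the worker side needs no knowledge of the overall workflow: any conventional Linux executable can be farmed out unmodified.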

  12. Structural Weight Estimation for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd

    2002-01-01

    This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Examples of such studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can be performed at several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have improved structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects that may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as Java provide new means to create deeper analysis tools that can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.

  13. Polish plant beats the odds to become model EU generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neville, A.

    2009-03-15

    Once a Soviet satellite, Poland is now transforming into a thoroughly modern nation. To support its growing economy, this recent European Union member country is modernizing its power industry. Exemplifying the advances in the Polish electricity generation market is the 460 MW Patnow II power plant - the largest, most efficient (supercritical cycle) and environmentally cleanest lignite-fired unit in the country. 3 photos.

  14. Evaluating Modern Defenses Against Control Flow Hijacking

    DTIC Science & Technology

    2015-09-01

unsound and could introduce false negatives (opening up another possible set of attacks). CFG Construction using DSA: We next evaluate the precision of CFG ... Evaluating Modern Defenses Against Control Flow Hijacking, by Ulziibayar Otgonbaatar. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering at the MASSACHUSETTS

  15. Translations on Eastern Europe, Scientific Affairs, Number 542.

    DTIC Science & Technology

    1977-04-18

transplanting human tissue has not as yet been given a final juridical approval like euthanasia, artificial insemination, abortion, birth control, and others ... and data teleprocessing. This computer may also be used as a satellite computer for complex systems. The IZOT 310 has a large instruction ... a well-known truth that modern science is using the most modern and leading technical facilities, from bathyscaphes to satellites, from gigantic

  16. Resiliency in Future Cyber Combat

    DTIC Science & Technology

    2016-04-04

including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.”6 One important point emerging from the ... definition is that while the Internet is part of cyberspace, it is not all of cyberspace. Any computer processor capable of communicating with a ... central processor on a modern car are all part of cyberspace, although only some of them are routinely connected to the Internet. Most modern

  17. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu

    2014-09-11

The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
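
    The Active Set and Reactive Conservative algorithms themselves are detailed in the paper; the baseline they improve upon is classic conservative synchronization, where each simulator may only advance to the earliest time at which any other simulator could still send it an event. A toy sketch under that assumption, with hypothetical simulator names and step sizes, using integer milliseconds to avoid floating-point drift:

    ```python
    class Simulator:
        """Toy federate with a local clock and a fixed lookahead step."""
        def __init__(self, name, step):
            self.name = name
            self.time = 0    # local clock [ms]
            self.step = step # lookahead: no events sent before time + step

        def advance(self, limit):
            # Advance in fixed steps, never beyond the granted time limit.
            while self.time + self.step <= limit:
                self.time += self.step

    def conservative_round(sims):
        """Grant each simulator the minimum guaranteed time of the others.

        Classic conservative rule: a federate may only advance to the
        lower bound of timestamps any other federate could still send.
        """
        for s in sims:
            bound = min(o.time + o.step for o in sims if o is not s)
            s.advance(bound)

    grid = Simulator("power-grid", step=500)   # 0.5 s grid solver step
    net = Simulator("comm-network", step=100)  # 0.1 s network event step
    for _ in range(10):
        conservative_round([grid, net])
    print(grid.time, net.time)  # → 4500 5000
    ```

    The repeated min-bound exchange in every round is exactly the overhead the paper's two algorithms reduce.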

  18. High-performance finite-difference time-domain simulations of C-Mod and ITER RF antennas

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, David N.

    2015-12-01

    Finite-difference time-domain methods have, in recent years, developed powerful capabilities for modeling realistic ICRF behavior in fusion plasmas [1, 2, 3, 4]. When coupled with the power of modern high-performance computing platforms, such techniques allow the behavior of antenna near and far fields, and the flow of RF power, to be studied in realistic experimental scenarios at previously inaccessible levels of resolution. In this talk, we present results and 3D animations from high-performance FDTD simulations on the Titan Cray XK7 supercomputer, modeling both Alcator C-Mod's field-aligned ICRF antenna and the ITER antenna module. Much of this work focuses on scans over edge density, and tailored edge density profiles, to study dispersion and the physics of slow wave excitation in the immediate vicinity of the antenna hardware and SOL. An understanding of the role of the lower-hybrid resonance in low-density scenarios is emerging, and possible implications of this for the NSTX launcher and power balance are also discussed. In addition, we discuss ongoing work centered on using these simulations to estimate sputtering and impurity production, as driven by the self-consistent sheath potentials at antenna surfaces.
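
    The FDTD method referenced here staggers electric and magnetic field updates in time and space (the Yee scheme). A minimal 1D vacuum sketch in normalized units, far simpler than the talk's plasma-loaded 3D antenna models, illustrates the leapfrog structure:

    ```python
    import math

    def fdtd_1d(steps=200, n=200):
        """Leapfrog update of Ez and Hy on a 1D Yee grid in vacuum.

        Normalized units; c is the Courant factor (must be < 1 in 1D
        for stability with this simple formulation).
        """
        ez = [0.0] * n
        hy = [0.0] * n
        c = 0.5
        for t in range(steps):
            # H update uses the spatial difference of E (staggered grid)
            for i in range(n - 1):
                hy[i] += c * (ez[i + 1] - ez[i])
            # E update uses the spatial difference of H, half a cell shifted
            for i in range(1, n):
                ez[i] += c * (hy[i] - hy[i - 1])
            # Soft source: inject a Gaussian pulse near the left side
            ez[50] += math.exp(-((t - 30) ** 2) / 100.0)
        return ez

    fields = fdtd_1d()
    print(max(abs(e) for e in fields) > 0)  # → True: pulse has propagated
    ```

    Each grid cell's update depends only on its immediate neighbors, which is what makes FDTD map so well onto the massively parallel platforms the abstract mentions.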

  19. High-performance finite-difference time-domain simulations of C-Mod and ITER RF antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, Thomas G., E-mail: tgjenkins@txcorp.com; Smithe, David N., E-mail: smithe@txcorp.com

Finite-difference time-domain methods have, in recent years, developed powerful capabilities for modeling realistic ICRF behavior in fusion plasmas [1, 2, 3, 4]. When coupled with the power of modern high-performance computing platforms, such techniques allow the behavior of antenna near and far fields, and the flow of RF power, to be studied in realistic experimental scenarios at previously inaccessible levels of resolution. In this talk, we present results and 3D animations from high-performance FDTD simulations on the Titan Cray XK7 supercomputer, modeling both Alcator C-Mod’s field-aligned ICRF antenna and the ITER antenna module. Much of this work focuses on scans over edge density, and tailored edge density profiles, to study dispersion and the physics of slow wave excitation in the immediate vicinity of the antenna hardware and SOL. An understanding of the role of the lower-hybrid resonance in low-density scenarios is emerging, and possible implications of this for the NSTX launcher and power balance are also discussed. In addition, we discuss ongoing work centered on using these simulations to estimate sputtering and impurity production, as driven by the self-consistent sheath potentials at antenna surfaces.

  20. ESIF 2016: Modernizing Our Grid and Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Becelaere, Kimberly

    This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.

  1. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation; it permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.
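
    The anonymization step such a middleware must perform before images are shared on the grid can be sketched as stripping patient-identifying header fields. The field names below are illustrative only; real DICOM de-identification follows the DICOM PS3.15 confidentiality profiles and covers many more tags:

    ```python
    def anonymize(header):
        """Replace patient-identifying fields in an image header dict.

        Field names are illustrative; real DICOM anonymization uses the
        standard's attribute confidentiality profiles, not this short list.
        """
        identifying = {"PatientName", "PatientID", "PatientBirthDate"}
        return {k: ("ANONYMIZED" if k in identifying else v)
                for k, v in header.items()}

    hdr = {"PatientName": "DOE^JOHN", "PatientID": "1234",
           "Modality": "CT", "StudyDate": "20100301"}
    clean = anonymize(hdr)
    print(clean["PatientName"], clean["Modality"])  # → ANONYMIZED CT
    ```

    Clinically useful metadata (modality, study date) is preserved so researchers can still select and process images meaningfully.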

  2. Computer Technology: State of the Art.

    ERIC Educational Resources Information Center

    Withington, Frederic G.

    1981-01-01

Describes the nature of modern general-purpose computer systems, including hardware, semiconductor electronics, microprocessors, computer architecture, input/output technology, and system control programs. Seven suggested readings are cited. (FM)

  3. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling designed specially for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring.
The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  4. The "Handling" of power in the physician-patient encounter: perceptions from experienced physicians.

    PubMed

    Nimmon, Laura; Stenfors-Hayes, Terese

    2016-04-18

    Modern healthcare is burgeoning with patient centered rhetoric where physicians "share power" equally in their interactions with patients. However, how physicians actually conceptualize and manage their power when interacting with patients remains unexamined in the literature. This study explored how power is perceived and exerted in the physician-patient encounter from the perspective of experienced physicians. It is necessary to examine physicians' awareness of power in the context of modern healthcare that espouses values of dialogic, egalitarian, patient centered care. Thirty physicians with a minimum five years' experience practicing medicine in the disciplines of Internal Medicine, Surgery, Pediatrics, Psychiatry and Family Medicine were recruited. The authors analyzed semi-structured interview data using LeCompte and Schensul's three stage process: Item analysis, Pattern analysis, and Structural analysis. Theoretical notions from Bourdieu's social theory served as analytic tools for achieving an understanding of physicians' perceptions of power in their interactions with patients. The analysis of data highlighted a range of descriptions and interpretations of relational power. Physicians' responses fell under three broad categories: (1) Perceptions of holding and managing power, (2) Perceptions of power as waning, and (3) Perceptions of power as non-existent or irrelevant. Although the "sharing of power" is an overarching goal of modern patient-centered healthcare, this study highlights how this concept does not fully capture the complex ways experienced physicians perceive, invoke, and redress power in the clinical encounter. Based on the insights, the authors suggest that physicians learn to enact ethical patient-centered therapeutic communication through reflective, effective, and professional use of power in clinical encounters.

5. An embedded processor for real-time atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-05-01

Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form factor that is compact, low-power, and field-deployable.

  6. Availability Performance Analysis of Thermal Power Plants

    NASA Astrophysics Data System (ADS)

    Bhangu, Navneet Singh; Singh, Rupinder; Pahuja, G. L.

    2018-03-01

This case study presents an availability evaluation method for thermal power plants for conducting performance analysis in the Indian environment. A generic availability model has been proposed for a maintained system (thermal plants) using reliability block diagrams and fault tree analysis. The availability indices have been evaluated under a realistic working environment using the inclusion-exclusion principle. A four-year failure database has been used to compute availability for different combinations of plant states, that is, full working state, reduced capacity, or failure state. Availability is found to be quite low even at full rated capacity (440 MW), which is not acceptable, especially in the prevailing energy scenario. One probable reason for this may be the difference in the age/health of the existing thermal power plants, which requires special attention for each unit on a case-by-case basis. The maintenance techniques being used are conventional (50 years old) and inappropriate in the context of modern equipment, which further aggravates the problem of low availability. This study highlights a procedure for finding critical plants/units/subsystems and helps in deciding a preventive maintenance program.
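
    The inclusion-exclusion principle used for the availability indices can be sketched for a set of independent redundant units; the unit availabilities below are illustrative, not taken from the plant's failure database:

    ```python
    from itertools import combinations

    def parallel_availability(avails):
        """P(at least one unit up) by inclusion-exclusion over unit subsets.

        For independent units this equals 1 - prod(1 - a_i), but the
        inclusion-exclusion form generalizes to arbitrary minimal path
        sets in a reliability block diagram.
        """
        n = len(avails)
        total = 0.0
        for k in range(1, n + 1):
            for subset in combinations(avails, k):
                prob = 1.0
                for a_i in subset:
                    prob *= a_i
                total += (-1) ** (k + 1) * prob
        return total

    # Three illustrative redundant units
    a = parallel_availability([0.9, 0.85, 0.8])
    print(round(a, 4))  # → 0.997, matching 1 - (0.1 * 0.15 * 0.2)
    ```

    Extending the subset enumeration to the minimal path sets of a full block diagram gives the plant-level indices for full, reduced, and failed capacity states described in the abstract.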

  7. Nuclear plants gain integrated information systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villavicencio-Ramirez, A.; Rodriquez-Alvarez, J.M.

    1994-10-01

With the objective of simplifying the complex mesh of computing devices employed within nuclear power plants, modern technology and integration techniques are being used to form centralized (but backed up) databases and distributed processing and display networks. Benefits are immediate as a result of the integration and the use of standards. The use of a unique data acquisition and database subsystem optimizes the high costs of engineering, as this task is done only once for the life span of the system. This also contributes towards a uniform user interface and allows for graceful expansion and maintenance. This article features an integrated information system, Sistema Integral de Informacion de Proceso (SIIP). The development of this system enabled the Laguna Verde Nuclear Power plant to fully use the already existing universe of signals and its related engineering during all plant conditions, namely, start up, normal operation, transient analysis, and emergency operation. Integrated systems offer many advantages over segregated systems, and this experience should benefit similar development efforts in other electric power utilities, not only for nuclear but also for other types of generating plants.

  8. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high- or low-frequency model, and displaying the results.

  9. Multicore Architecture-aware Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinivasa, Avinash

Modern high-performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.

  10. Phasor Measurement Unit and Its Application in Modern Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Makarov, Yuri V.; Dong, Zhao Yang

    2010-06-01

The introduction of phasor measurement units (PMUs) in power systems significantly improves the possibilities for monitoring and analyzing power system dynamics. Synchronized measurements make it possible to directly measure phase angles between corresponding phasors in different locations within the power system. Improved monitoring and remedial action capabilities allow network operators to utilize the existing power system in a more efficient way. Improved information allows fast and reliable emergency actions, which reduces the need for the relatively high transmission margins required by potential power system disturbances. In this chapter, the applications of PMUs in modern power systems are presented. Specifically, the topics touched on in this chapter include state estimation, voltage and transient stability, oscillation monitoring, event and fault detection, situational awareness, and model validation. A case study using the characteristic ellipsoid method based on PMU data to monitor power system dynamics is presented.
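
    Because PMU measurements are GPS-synchronized, the phase angle difference between two buses follows directly from the reported phasors. A minimal sketch with hypothetical per-unit values:

    ```python
    import cmath
    import math

    def phase_angle_diff(v1, v2):
        """Angle of the bus-1 phasor relative to bus-2, in degrees,
        wrapped to the interval [-180, 180)."""
        d = math.degrees(cmath.phase(v1) - cmath.phase(v2))
        return (d + 180.0) % 360.0 - 180.0

    # Two synchronized voltage phasors (per-unit magnitude, angle in degrees)
    v_a = cmath.rect(1.02, math.radians(12.0))
    v_b = cmath.rect(0.98, math.radians(-8.5))
    print(round(phase_angle_diff(v_a, v_b), 1))  # → 20.5
    ```

    In an unsynchronized system this subtraction would be meaningless; the common GPS time base is what makes wide-area angle monitoring possible.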

11. Power Semiconductors - State of the Art and Future Trends

    NASA Astrophysics Data System (ADS)

    Benda, Vitezslav

    2011-06-01

The importance of effective energy conversion control, including power generation from renewable and environmentally clean energy sources, increases due to rising energy demand. Power electronic systems for controlling and converting electrical energy have become the workhorse of modern society in many applications, both in industry and at home. Power electronics plays a very important role in traction and can be considered the brawn of robotics and automated manufacturing systems. Power semiconductor devices are the key electronic components used in power electronic systems. Advances in power semiconductor technology have improved the efficiency, size, weight, and cost of power electronic systems. At present, IGCTs, IGBTs, and MOSFETs represent modern switching devices. Power integrated circuits (PICs) have been developed for the use of power converters in portable, automotive, and aerospace applications. For advanced applications, new materials (SiC and GaN) have been introduced. This paper reviews the state of these devices and elaborates on their potential in terms of higher voltages, higher power density, and better switching performance.

  12. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.

    2011-07-01

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. 
In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
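
    The key property exploited above, that each debris object can be propagated independently, can be illustrated by mapping a simple analytical circular-orbit propagation over a particle set. The thread pool below merely stands in for a GPU kernel launch; orbit parameters are illustrative:

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    def propagate_kepler(args):
        """Advance one object on an unperturbed circular orbit (analytical).

        Each object is independent of all others, which is exactly what
        makes this step data-parallel on GPUs or multi-core CPUs.
        """
        sma, phase, dt = args          # semi-major axis [km], phase [rad], step [s]
        mu = 398600.4418               # Earth's GM [km^3/s^2]
        n = math.sqrt(mu / sma ** 3)   # mean motion [rad/s]
        return (phase + n * dt) % (2 * math.pi)

    particles = [(7000.0 + i, 0.0, 60.0) for i in range(1000)]

    serial = [propagate_kepler(p) for p in particles]
    with ThreadPoolExecutor() as pool:  # stand-in for an OpenCL kernel launch
        parallel = list(pool.map(propagate_kepler, particles))

    print(serial == parallel)  # → True: independent work items, identical results
    ```

    Since there is no shared state between work items, the parallel mapping gives bit-identical results to the serial loop, and an OpenCL or GPU implementation differs only in where the map executes.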

  13. Using parallel computing for the display and simulation of the space debris environment

    NASA Astrophysics Data System (ADS)

    Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter

    Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. 
In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.

  14. The Computational Ecologist’s Toolbox

    EPA Science Inventory

    Computational ecology, nestled in the broader field of data science, is an interdisciplinary field that attempts to improve our understanding of complex ecological systems through the use of modern computational methods. Computational ecology is based on a union of competence in...

  15. A browser-based event display for the CMS experiment at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hategan, M.; McCauley, T.; Nguyen, P.

    2012-01-01

The line between native and web applications is becoming increasingly blurred as modern web browsers become powerful platforms on which applications can be run. Such applications are trivial to install, readily extensible, and easy to use. In an educational setting, web applications permit deployment of tools in a highly restrictive computing environment. The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser supporting HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to the display of events can have broader usage and impact for experts and public alike.

  16. Did farming arise from a misapplication of social intelligence?

    PubMed

    Mithen, Steven

    2007-04-29

The origin of farming is the defining event of human history: the one turning point that has resulted in modern humans having a quite different type of lifestyle and cognition from all other animals and past types of humans. With the economic basis provided by farming, human individuals and societies have developed types of material culture that greatly augment powers of memory and computation, extending human mental capacity far beyond that which the brain alone can provide. Archaeologists have long debated why people began living in settled communities and became dependent on cultivated plants and animals, which soon evolved into domesticated forms. One of the most intriguing explanations was proposed more than 20 years ago not by an archaeologist but by a psychologist: Nicholas Humphrey suggested that farming arose from the 'misapplication of social intelligence'. I explore this idea in relation to recent discoveries and archaeological interpretations in the Near East, arguing that social intelligence has indeed played a key role in the origin of farming and hence the emergence of the modern world.

  17. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computational power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies, whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS), which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations.
We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
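
The symplectic flavor of the WHM/RMVS schemes can be illustrated with the simplest member of the family, a kick-drift-kick leapfrog for a single test particle around a central mass. This is a toy sketch in Python, not cuSwift's actual algorithm, but it shows the long-term energy behavior that motivates symplectic methods:

```python
import math

def leapfrog_step(pos, vel, dt, mu=1.0):
    """One kick-drift-kick step for a test particle around a central mass
    (gravitational parameter mu, planar units). Symplectic schemes like this,
    and the WHM/RMVS methods above, bound the energy error over very long
    integrations instead of letting it drift as naive Euler stepping does."""
    def accel(p):
        r = math.hypot(p[0], p[1])
        return (-mu * p[0] / r**3, -mu * p[1] / r**3)

    ax, ay = accel(pos)
    vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)  # half kick
    pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])      # drift
    ax, ay = accel(pos)
    vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)  # half kick
    return pos, vel

# Circular orbit at r = 1: specific energy should stay near -0.5.
pos, vel = (1.0, 0.0), (0.0, 1.0)
for _ in range(10000):
    pos, vel = leapfrog_step(pos, vel, 0.01)
energy = 0.5 * (vel[0]**2 + vel[1]**2) - 1.0 / math.hypot(pos[0], pos[1])
```

In a GPU implementation each test particle's step is independent, which is exactly what makes the method map well onto thousands of CUDA threads.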

  18. Application of web-GIS approach for climate change study

    NASA Astrophysics Data System (ADS)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Bogomolov, Vasily; Martynova, Yuliya; Shulgina, Tamara

    2013-04-01

    Georeferenced datasets are currently actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. It is based on OGC standards and employs many modern solutions, such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS framework, and OpenLayers software. The main advantage of the system is that mathematical and statistical data analysis, graphical visualization of results with GIS functionality, and preparation of binary output files require only a modern graphical web browser installed on a common desktop computer connected to the Internet. Several geophysical datasets, represented by two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective Analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others, are available for processing by the system, and this list continues to grow. Functionality to run the WRF and "Planet Simulator" models has also been implemented in the system.
    Because many parameters are preset and the temporal and spatial ranges are limited, these models have low computational power requirements and can be used in educational workflows for better understanding of basic climatological and meteorological processes. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. Even an unskilled user without specific knowledge can use it to perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  19. Understanding the Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-01-14

    The electric power grid has been rightly celebrated as the single most important engineering feat of the 20th century. The grid powers our homes, offices, hospitals, and schools; and, increasingly, it powers our favorite devices, from smartphones to HDTVs. With those and other modern innovations and challenges, our grid will need to evolve. Grid modernization efforts will help the grid make full use of today's advanced technologies and serve our needs in the 21st century. While the vast majority of upgrades are implemented by private sector energy companies that own and operate the grid, DOE has been investing in technologies that are revolutionizing the way we generate, store, and transmit power.

  20. Large-scale molecular dynamics simulation of DNA: implementation and validation of the AMBER98 force field in LAMMPS.

    PubMed

    Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles

    2004-07-15

    Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.

  1. Active Power Control by Wind Power | Grid Modernization | NREL

    Science.gov Websites

    of conventional generators. How this will affect the system at different wind power penetration levels is not well understood. To gain insight, NREL researchers conducted simulations of different

  2. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high-dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
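
The kernel being parallelized is a root-find of Kepler's equation M = E - e*sin(E) for each model evaluation. A minimal scalar Newton-iteration sketch in Python (the CUDA version and its mixed-precision handling are far more involved):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton iteration (M in radians, 0 <= e < 1). A GPU solver evaluates
    essentially this kernel independently for millions of (M, e) pairs."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        E -= f / (1.0 - e * math.cos(E))  # Newton step
        if abs(f) < tol:
            break
    return E
```

Because each (M, e) pair is independent, the loop maps directly onto one GPU thread per model evaluation.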

  3. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
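
For intuition about how the design-dependent standard error drives power, the common normal approximation to the power of the two one-sided tests (TOST) procedure can be sketched as below. This is the textbook shortcut, not the random-effects computation behind the paper's tables; the bioequivalence limits of +/- log(1.25) correspond to the usual 80-125% criterion:

```python
import math
from statistics import NormalDist

def tost_power(theta, se, alpha=0.05, limit=math.log(1.25)):
    """Normal-approximation power of the TOST procedure for average
    bioequivalence. theta: true formulation difference on the log scale;
    se: standard error of its estimate, which depends on the crossover
    design and the within-subject variance. Illustrative textbook formula,
    not the random-effects linear-model computation used for the tables."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1.0 - alpha)
    power = (nd.cdf((limit - theta) / se - z_a)
             + nd.cdf((theta + limit) / se - z_a) - 1.0)
    return max(0.0, power)

# Smaller standard error (larger n, or a replicate design) -> higher power.
high = tost_power(0.0, 0.05)
low = tost_power(0.0, 0.20)
```

In a sample-size table one inverts this relationship, searching for the smallest n whose implied standard error yields power at or above the target (e.g., 80% or 90%).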

  4. International Atomic Energy Agency specialists meeting on experience in ageing, maintenance, and modernization of instrumentation and control systems for improving nuclear power plant availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-10-01

    This report presents the proceedings of the Specialists' Meeting on Experience in Aging, Maintenance and Modernization of Instrumentation and Control Systems for Improving Nuclear Power Plant Availability that was held at the Ramada Inn in Rockville, Maryland on May 5--7, 1993. The Meeting was presented in cooperation with the Electric Power Research Institute, Oak Ridge National Laboratory and the International Atomic Energy Agency. There were approximately 65 participants from 13 countries at the Meeting. Individual reports have been cataloged separately.

  5. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  6. A preliminary study of molecular dynamics on reconfigurable computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolinski, C.; Trouw, F. R.; Gokhale, M.

    2003-01-01

    In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique that tracks interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating-point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating-point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 GFlops/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
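
The floating-point-intensive kernel such an FPGA fabric pipelines is essentially the pairwise force accumulation. An illustrative scalar sketch using the Lennard-Jones potential (a generic MD example, not necessarily the paper's exact kernel):

```python
def lj_forces(positions, epsilon=1.0, sigma=1.0):
    """Accumulate pairwise Lennard-Jones forces, the kind of floating-point
    inner loop an MD accelerator pipelines in hardware. Plain Python for
    clarity; an FPGA fabric would evaluate the same arithmetic with chained
    32-bit IEEE floating-point operators."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(c * c for c in d)
            s6 = (sigma * sigma / r2) ** 3  # (sigma/r)^6
            # Force magnitude over r: F/r = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2
            f = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r2
            for k in range(3):
                forces[i][k] += f * d[k]   # Newton's third law:
                forces[j][k] -= f * d[k]   # equal and opposite on j
    return forces
```

The O(n^2) pair loop has no data-dependent branches, which is precisely what makes deep hardware pipelining effective.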

  7. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    DTIC Science & Technology

    2015-09-01

    ...soundness or completeness. An incomplete analysis will produce extra edges in the CFG that might allow an attacker to slip through. An unsound analysis... Thesis by Isaac Noah Evans, submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science.

  8. Body metaphors--reading the body in contemporary culture.

    PubMed

    Skara, Danica

    2004-01-01

    This paper addresses the linguistic reframing of the human body in contemporary culture. Our aim is to provide a linguistic description of the ways in which the body is represented in the modern English language. First, we focus on body metaphors in general, drawing on a collected sample of 300 words and phrases functioning as body metaphors in modern English. In reading the symbolism of the body, we witness changes in the basic metaphorical structuring of the human body. The results show that new vocabulary binds different fields of knowledge associated with machines and human beings according to a shared textual frame: the human-as-computer and computer-as-human metaphor. Humans are almost blended with computers and vice versa. This metaphorical use of the human body and its parts reveals not only currents of unconscious thought but also the structures of modern society and culture.

  9. Federal Aviation Administration : challenges in modernizing the agency

    DOT National Transportation Integrated Search

    2000-02-01

    FAA's efforts to implement initiatives in five key areas-air traffic control modernization, procurement and personnel reform, aviation safety, aviation and computer security, and financial management-have met with limited success. For example, FAA ha...

  10. Preparing Dairy Technologists

    ERIC Educational Resources Information Center

    Sliva, William R.

    1977-01-01

    The size of modern dairy plant operations has led to extreme specialization in product manufacturing, milk processing, microbiological analysis, chemical and mathematical computations. Morrisville Agricultural and Technical College, New York, has modernized its curricula to meet these changes. (HD)

  11. Power and Authority in Adult Education

    ERIC Educational Resources Information Center

    Alsobaie, Mohammed Fahad

    2015-01-01

    This paper covers power and authority in adult education, focusing on the modern definitions of power and authority in the educational context, then moving into past precedents of the use of power and authority of classrooms. Finally, the optimal types of power and authority to apply to adult education are examined. Power defines a relationship…

  12. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs) is a major engineering challenge in the operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it affords electricity customers a degree of independence and privacy, since no usage information is required, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy flows only from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads.
Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out in order to demonstrate their convergence properties to a desired solution.
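
One simple way to coordinate charging under a shared capacity constraint, in the spirit of the V1G scheme though not the dissertation's actual algorithm, is iterative water-filling over the requests:

```python
def allocate_charging(requests_kw, capacity_kw):
    """Split limited feeder capacity among PEV charging requests by iterative
    water-filling: each active vehicle gets an equal share, capped at its
    request, and leftover capacity is redistributed. A simplified stand-in
    for the hierarchical V1G coordination described above."""
    alloc = {v: 0.0 for v in requests_kw}
    remaining = dict(requests_kw)
    cap = capacity_kw
    while cap > 1e-9 and remaining:
        share = cap / len(remaining)
        done = []
        for v, need in remaining.items():
            give = min(share, need)
            alloc[v] += give
            cap -= give
            remaining[v] = need - give
            if remaining[v] <= 1e-9:
                done.append(v)
        for v in done:
            del remaining[v]
        if not done:  # everyone consumed a full share; capacity exhausted
            break
    return alloc
```

In a hierarchical scheme, an aggregator at each feeder would run this kind of allocation locally and report only its total draw upward, preserving customer privacy.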

  13. Looking ahead in systems engineering

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Donald S.

    1966-01-01

    Five areas that are discussed in this paper are: (1) the technological characteristics of systems engineering; (2) the analytical techniques that are giving modern systems work its capability and power; (3) the management, economics, and effectiveness dimensions that now frame the modern systems field; (4) systems engineering's future impact upon automation, computerization and managerial decision-making in industry - and upon aerospace and weapons systems in government and the military; and (5) modern systems engineering's partnership with modern quality control and reliability.

  14. Student Perceived Importance and Correlations of Selected Computer Literacy Course Topics

    ERIC Educational Resources Information Center

    Ciampa, Mark

    2013-01-01

    Traditional college-level courses designed to teach computer literacy are in a state of flux. Today's students have high rates of access to computing technology and computer ownership, leading many policy decision makers to conclude that students already are computer literate and thus computer literacy courses are dinosaurs in a modern digital…

  15. Computational Ecology and Open Science: Tools to Help Manage Lakes for Cyanobacteria in Lakes

    EPA Science Inventory

    Computational ecology is an interdisciplinary field that takes advantage of modern computation abilities to expand our ecological understanding. As computational ecologists, we use large data sets, which often cover large spatial extents, and advanced statistical/mathematical co...

  16. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. 
I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
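
The supra-regular capacity the abstract describes can be made concrete with a toy example: recognizing the center-embedded language a^n b^n requires a push-down stack (or counter), which no finite-state machine can emulate for unbounded n. A minimal Python sketch:

```python
def accepts_anbn(s):
    """Recognize the context-free language a^n b^n (n >= 1) using a stack,
    the kind of supra-regular capacity the abstract argues requires a
    neural equivalent of a push-down mechanism. A finite-state machine
    cannot track the unbounded count of a's needed to match the b's."""
    stack = []
    i = 0
    while i < len(s) and s[i] == 'a':   # push one symbol per 'a'
        stack.append('a')
        i += 1
    while i < len(s) and s[i] == 'b':   # pop one symbol per 'b'
        if not stack:
            return False                # more b's than a's
        stack.pop()
        i += 1
    return i == len(s) and not stack and len(s) > 0
```

Strings like "aaabbb" are accepted; "aabbb" and interleaved forms like "abab" are rejected, since the single stack enforces exact, nested matching.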

  17. Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition.

    PubMed

    Fitch, W Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. 
    I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.

  18. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
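
A minimal sketch of the feedback idea, with hypothetical names and thresholds rather than the patent's actual claimed mechanism: sensors report power draw, and a control device adjusts a compute duty cycle against a cap:

```python
def throttle_step(measured_watts, cap_watts, duty, step=0.05):
    """One iteration of a simple feedback throttle: reduce the compute duty
    cycle when measured power exceeds the cap, and raise it (up to 100%)
    when there is comfortable headroom. Hypothetical illustration of a
    sensor-driven power control loop, not the patented method itself."""
    if measured_watts > cap_watts:
        duty = max(0.1, duty - step)              # back off, but keep progress
    elif measured_watts < 0.95 * cap_watts:
        duty = min(1.0, duty + step)              # reclaim unused headroom
    return duty
```

Run per node by a local control device, with a system-level controller setting each node's cap, this yields the kind of hierarchical power management the abstract describes.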

  19. Modern Electronic Devices: An Increasingly Common Cause of Skin Disorders in Consumers.

    PubMed

    Corazza, Monica; Minghetti, Sara; Bertoldi, Alberto Maria; Martina, Emanuela; Virgili, Annarosa; Borghi, Alessandro

    2016-01-01

    The modern conveniences and enjoyment brought about by electronic devices bring with them some health concerns. In particular, personal electronic devices are responsible for a rising number of cases of several skin disorders, including pressure and friction dermatitis, contact dermatitis, and other physical dermatoses. The universal use of such devices, whether for work or recreational purposes, will probably increase the occurrence of polymorphous skin manifestations over time. It is important for clinicians to consider electronics as potential sources of dermatological ailments for proper patient management. We performed a literature review on skin disorders associated with the personal use of modern technology, including personal computers and laptops, personal computer accessories, mobile phones, tablets, video games, and consoles.

  20. Modernization of the Control Systems of High-Frequency, Brush-Free, and Collector Exciters of Turbogenerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, E. N., E-mail: enpo@ruselmash.ru; Komkov, A. L.; Ivanov, S. L.

    Methods of modernizing the regulation systems of electric machinery exciters with high-frequency, brush-free, and collector exciters by means of microprocessor technology are examined. The main problems of modernization are to increase the response speed of a system and to use a system stabilizer to increase the stability of the power system.

  1. Human Science for Human Freedom? Piaget's Developmental Research and Foucault's Ethical Truth Games

    ERIC Educational Resources Information Center

    Zhao, Guoping

    2012-01-01

    The construction of the modern subject and the pursuit of human freedom and autonomy, as well as the practice of human science has been pivotal in the development of modern education. But for Foucault, the subject is only the effect of discourses and power-knowledge arrangements, and modern human science is part of the very arrangement that has…

  2. Vids: Version 2.0 Alpha Visualization Engine

    DTIC Science & Technology

    2018-04-25

    fidelity than existing efforts. Vids is a project aimed at producing more dynamic and interactive visualization tools using modern computer game... move through and interact with the data to improve informational understanding. The Vids software leverages off-the-shelf modern game development... analysis and correlations. Recently, an ARL-pioneered project named Virtual Reality Data Analysis Environment (VRDAE) used VR and a modern game engine

  3. The efficiency of night insulation using aerogel-filled polycarbonate panels during the heating season

    NASA Astrophysics Data System (ADS)

    Adelsberger, Kathleen

    Energy is the basis for modern life. All modern technology, from a simple coffee maker to massive industrial facilities, is powered by energy. While the demand for energy is increasing, our planet is suffering from the consequences of using fossil fuels to generate electricity. Therefore, the world is looking to clean energy and solar power to minimize this effect on our environment. However, saving energy is extremely important even with clean energy: the more we save, the less we have to generate. Heat retention in buildings is one step toward achieving passive heating, so efforts are made to prevent heat from escaping buildings through the glass during cold nights. Movable insulation is a way to increase the insulation value of the glass and reduce heat loss to the outdoors. This thesis examines the performance of aerogel-filled polycarbonate movable panels in the Ecohawks building, located on the west campus of The University of Kansas. Onsite tests were performed using air and surface temperature sensors to determine the effectiveness of the system. Computer simulations were run with the Therm 7.2 simulation software to explore alternative design options. A cost analysis was also performed to evaluate the feasibility of utilizing movable insulation to reduce heating bills during winter. Results showed that sealed movable insulation reduces heat loss through the glazing by 67.5%. Replacing aerogel with XPS panels reduces this percentage to 64.3%; however, it reduces the cost of the insulation material by 98%.
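
The glazing numbers follow from the steady-state conduction relation Q = U*A*dT, where adding an insulating layer of thermal resistance R in series lowers the effective U-value. The sketch below uses illustrative U- and R-values (assumptions, not the thesis's measured ones) that land in the same general range as the reported reduction:

```python
def heat_loss_w(u_glass, r_added, area_m2, delta_t):
    """Steady-state conductive heat loss Q = U*A*dT (watts) through glazing,
    optionally with an added insulation layer of thermal resistance
    r_added (m^2*K/W) in series: U_total = 1 / (1/U_glass + R_added)."""
    u_total = 1.0 / (1.0 / u_glass + r_added)
    return u_total * area_m2 * delta_t

# Assumed double glazing (U = 2.8 W/m^2K), 10 m^2, 20 K indoor-outdoor difference.
base = heat_loss_w(2.8, 0.0, 10.0, 20.0)
# With a hypothetical aerogel panel adding R = 0.75 m^2*K/W.
insulated = heat_loss_w(2.8, 0.75, 10.0, 20.0)
reduction = 1.0 - insulated / base  # roughly two thirds of the loss avoided
```

The same relation explains why a slightly lower-R substitute such as XPS gives a slightly smaller (but still large) percentage reduction.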

  4. The new classic data acquisition system for NPOI

    NASA Astrophysics Data System (ADS)

    Sun, B.; Jorgensen, A. M.; Landavazo, M.; Hutter, D. J.; van Belle, G. T.; Mozurkewich, David; Armstrong, J. T.; Schmitt, H. R.; Baines, E. K.; Restaino, S. R.

    2014-07-01

The New Classic data acquisition system is an important part of a new stellar surface imaging project with the NPOI, funded by the National Science Foundation, and enables the data acquisition necessary for the project. The NPOI can simultaneously deliver beams from 6 telescopes to the beam combining facility, and in the Classic beam combiner these are combined 4 at a time on 3 separate spectrographs with all 15 possible baselines observed. The Classic data acquisition system is limited to 16 of 32 wavelength channels on two spectrographs and to 30 s integrations followed by a pause to flush data. Classic also has some limitations in its fringe-tracking capability. These factors, and the fact that Classic incorporates 1990s technology that cannot be easily replaced, motivate upgrading the data acquisition system. The New Classic data acquisition system is based around modern electronics, including a high-end Stratix FPGA, a 200 MB/s Direct Memory Access card, and a fast modern Linux computer. These allow for continuous recording of all 96 channels across three spectrographs, increasing the total amount of data recorded by an estimated order of magnitude. The additional computing power of the data acquisition system also allows for the implementation of more sophisticated fringe-tracking algorithms, which are needed for the Stellar Surface Imaging project. In this paper we describe the New Classic system design and implementation, describe the background and motivation for the system, and show some initial results from using it.
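The "order of magnitude" claim can be sanity-checked with back-of-envelope arithmetic: the channel counts come from the text, while the duty cycle of the old stop-and-flush system is a guessed figure for illustration only.

```python
# Rough throughput-gain estimate for the New Classic upgrade.
# Channel counts are from the text; the old duty cycle is an assumption.

old_channels = 16     # Classic: 16 of 32 channels on two spectrographs
new_channels = 96     # New Classic: all 96 channels on three spectrographs

old_duty_cycle = 0.6  # assumed: 30 s integrations separated by flush pauses
new_duty_cycle = 1.0  # continuous recording

gain = (new_channels / old_channels) * (new_duty_cycle / old_duty_cycle)
print(f"estimated throughput gain: {gain:.0f}x")
```

With these assumptions the channel increase alone gives a factor of 6, and eliminating the flush pauses brings the combined gain to roughly 10x, consistent with the abstract's estimate.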

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germain, Shawn

Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today the majority of outage communication is done using processes that do not take advantage of modern technologies for communication, collaboration, and information sharing. Common practices include runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however, new opportunities can be realized by utilizing modern communication and information tools that enhance the collective situational awareness of plant personnel, leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability (LWRS) Program has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot-project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. It enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  6. Economic impact of fuel properties on turbine powered business aircraft

    NASA Technical Reports Server (NTRS)

    Powell, F. D.

    1984-01-01

The principal objective was to estimate the economic impact on the turbine-powered business aviation fleet of potential changes in the composition and properties of aviation fuel. Secondary objectives include estimation of the sensitivity of costs to specific fuel properties, and an assessment of the directions in which further research should be directed. The study was based on the published characteristics of typical and specific modern aircraft in three classes: heavy jet, light jet, and turboprop. Missions of these aircraft were simulated by computer methods for each aircraft for several range and payload combinations, and assumed atmospheric temperatures ranging from nominal to extremely cold. Five fuels were selected for comparison with the reference fuel, nominal Jet A. An overview of the data, the mathematical models, the data reduction and analysis procedure, and the results of the study are given. The direct operating costs of the study fuels are compared with that of the reference fuel in the 1990 time-frame, and the anticipated fleet costs and fuel break-even costs are estimated.

  7. Smartphone home monitoring of ECG

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Moon, Gyu; Landa, Joseph; Nakajima, Hiroshi; Hata, Yutaka

    2012-06-01

Ambulatory Holter electrocardiography (ECG) monitoring systems that record and transmit heartbeat data over the Internet are already commercially available. However, they enjoy only qualified confidence and thus limited market penetration. Our system instead targets aging global villagers with growing biomedical wellness (BMW) homecare needs, as opposed to hospital-related biomedical illness (BMI). It was designed within SWaP-C (Size, Weight, Power, and Cost) constraints using three innovative modules: (i) Smart Electrode (low-power mixed-signal electronics embedded with modern compressive sensing and nanotechnology to improve the electrodes' contact impedance); (ii) Learnable Database (adaptive wavelet-transform QRST feature extraction, with a Sequential Query Relational database that supports home-care monitoring and retrievable Aided Target Recognition); (iii) Smartphone (touch-screen interface, powerful computation capability, caretaker reporting with GPS, ID, and a patient panic button for a programmable emergency procedure). It can provide a supplementary home screening system for post- or pre-diagnosis care at home, with a built-in database searchable by the time, the place, and the degree of urgency of each event, using in-situ screening.

  8. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-body missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small-body mission design process that previously required iteration among several different design processes.
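One way to picture folding scientific value into target selection is a combined figure of merit that rewards science return and penalizes propulsive cost. The following toy sketch is entirely hypothetical (target names, numbers, and weighting are invented for illustration; the paper's actual formulation is a full multidisciplinary optimization, not a weighted sum):

```python
# Toy target-selection score: reward science value, penalize delta-v cost.
# All names, values, and weights are hypothetical illustrations.

candidates = {
    # target: (delta_v in km/s, science value in arbitrary units)
    "Target A": (3.5, 2.0),
    "Target B": (5.1, 4.5),
    "Target C": (5.9, 5.0),
}

def merit(delta_v, science, w_science=1.0, w_dv=0.5):
    """Higher is better: science return minus a propulsion-cost penalty."""
    return w_science * science - w_dv * delta_v

best = max(candidates, key=lambda t: merit(*candidates[t]))
print(best)
```

Even this crude scoring shows why target choice cannot be decoupled from hardware: changing the propulsion system changes the effective cost weight, which can reorder the candidates.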

  9. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
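Iris templates are commonly compared with a normalized Hamming distance between binary codes, and the comparisons are independent, which is what makes the workload easy to spread across cores. A minimal sketch of that parallelization (the generic technique, not the paper's exact kernel or bit layout):

```python
# Parallel iris matching sketch: normalized Hamming distance between a
# probe template and each gallery template, distributed across cores.
from multiprocessing import Pool

def hamming(pair):
    """Fraction of differing bits between two equal-length bit lists."""
    a, b = pair
    return sum(x != y for x, y in zip(a, b)) / len(a)

def match_all(probe, gallery, workers=4):
    """Score the probe against every gallery template in parallel."""
    with Pool(workers) as pool:
        return pool.map(hamming, [(probe, g) for g in gallery])

if __name__ == "__main__":
    probe = [1, 0, 1, 1, 0, 0, 1, 0]
    gallery = [probe, [0, 1, 0, 0, 1, 1, 0, 1]]
    print(match_all(probe, gallery))
```

Because each comparison is a fixed-size bitwise reduction, the same loop maps naturally onto FPGA logic, which is what gives the hybrid system its edge in the paper's benchmark.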

  10. Discovering H-bonding rules in crystals with inductive logic programming.

    PubMed

    Ando, Howard Y; Dehaspe, Luc; Luyten, Walter; Van Craenenbroeck, Elke; Vandecasteele, Henk; Van Meervelt, Luc

    2006-01-01

    In the domain of crystal engineering, various schemes have been proposed for the classification of hydrogen bonding (H-bonding) patterns observed in 3D crystal structures. In this study, the aim is to complement these schemes with rules that predict H-bonding in crystals from 2D structural information only. Modern computational power and the advances in inductive logic programming (ILP) can now provide computational chemistry with the opportunity for extracting structure-specific rules from large databases that can be incorporated into expert systems. ILP technology is here applied to H-bonding in crystals to develop a self-extracting expert system utilizing data in the Cambridge Structural Database of small molecule crystal structures. A clear increase in performance was observed when the ILP system DMax was allowed to refer to the local structural environment of the possible H-bond donor/acceptor pairs. This ability distinguishes ILP from more traditional approaches that build rules on the basis of global molecular properties.

  11. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.
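The Moore's/Kryder's-law growth the article invokes is compound doubling over a fixed period. A small worked example (the 18-month doubling period is a common illustrative figure, used here as an assumption):

```python
# Compound doubling: capability grows by 2**(years / doubling_period).
# The 1.5-year (18-month) period is an illustrative assumption.

def growth_factor(years, doubling_period_years=1.5):
    """Multiplicative growth over `years` under periodic doubling."""
    return 2 ** (years / doubling_period_years)

print(f"{growth_factor(15):.0f}x over 15 years")  # 2**10 = 1024x
```

Ten doublings in fifteen years yields a thousand-fold increase, which is the scale mismatch between data acquisition and human analysis capacity that motivates frameworks like CBDA.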

  12. Neuropeptide Signaling Networks and Brain Circuit Plasticity.

    PubMed

    McClard, Cynthia K; Arenkiel, Benjamin R

    2018-01-01

    The brain is a remarkable network of circuits dedicated to sensory integration, perception, and response. The computational power of the brain is estimated to dwarf that of most modern supercomputers, but perhaps its most fascinating capability is to structurally refine itself in response to experience. In the language of computers, the brain is loaded with programs that encode when and how to alter its own hardware. This programmed "plasticity" is a critical mechanism by which the brain shapes behavior to adapt to changing environments. The expansive array of molecular commands that help execute this programming is beginning to emerge. Notably, several neuropeptide transmitters, previously best characterized for their roles in hypothalamic endocrine regulation, have increasingly been recognized for mediating activity-dependent refinement of local brain circuits. Here, we discuss recent discoveries that reveal how local signaling by corticotropin-releasing hormone reshapes mouse olfactory bulb circuits in response to activity and further explore how other local neuropeptide networks may function toward similar ends.

  13. Single-electron random-number generator (RNG) for highly secure ubiquitous computing applications

    NASA Astrophysics Data System (ADS)

    Uchida, Ken; Tanamoto, Tetsufumi; Fujita, Shinobu

    2007-11-01

    Since the security of all modern cryptographic techniques relies on unpredictable and irreproducible digital keys generated by random-number generators (RNGs), the realization of high-quality RNG is essential for secure communications. In this report, a new RNG, which utilizes single-electron phenomena, is proposed. A room-temperature operating silicon single-electron transistor (SET) having nearby an electron pocket is used as a high-quality, ultra-small RNG. In the proposed RNG, stochastic single-electron capture/emission processes to/from the electron pocket are detected with high sensitivity by the SET, and result in giant random telegraphic signals (GRTS) on the SET current. It is experimentally demonstrated that the single-electron RNG generates extremely high-quality random digital sequences at room temperature, in spite of its simple configuration. Because of its small-size and low-power properties, the single-electron RNG is promising as a key nanoelectronic device for future ubiquitous computing systems with highly secure mobile communication capabilities.
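Physical noise sources like the giant random telegraph signal are typically thresholded into raw bits and then post-processed to remove residual bias. The sketch below shows von Neumann debiasing, a standard post-processing step for physical RNGs; it is presented as a generic technique, not as part of the paper's single-electron circuit.

```python
# Von Neumann debiasing: an unbiased bit stream from a biased one.
# Raw bits (e.g. thresholded from a random telegraph signal) are read in
# pairs: 01 -> 0, 10 -> 1, while 00 and 11 are discarded.

def von_neumann(bits):
    """Debias a raw bit list; output length is data-dependent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = [1, 1, 0, 1, 0, 0, 1, 0]  # biased raw samples (illustrative)
print(von_neumann(raw))          # [0, 1]
```

The cost of the method is throughput (at best a quarter of the raw rate), which is one reason high-quality physical sources that need little post-processing, like the one reported here, are attractive.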

  14. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named dynamic harmony search with polynomial mutation (DHSPM), to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
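The update loop described above, memory consideration with time-varying HMCR/PAR plus polynomial mutation, can be sketched on a toy one-variable cost function. The parameter schedules, the simplified (clamped) mutation variant, and the cost function are illustrative assumptions, not the exact DHSPM rules or the valve-point ELD objective:

```python
# Minimal harmony search with polynomial mutation on a toy cost function.
# Schedules and constants are illustrative, not the paper's DHSPM rules.
import random

def polynomial_mutation(x, lo, hi, eta=20.0):
    """Simplified polynomial mutation, clamped to the variable bounds."""
    u = random.random()
    if u < 0.5:
        delta = (2 * u) ** (1 / (eta + 1)) - 1
    else:
        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
    return min(hi, max(lo, x + delta * (hi - lo)))

def harmony_search(cost, lo, hi, hms=10, iters=500, seed=1):
    random.seed(seed)
    memory = [random.uniform(lo, hi) for _ in range(hms)]
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters   # dynamically increasing HMCR (assumed schedule)
        par = 0.45 - 0.35 * t / iters   # dynamically decreasing PAR (assumed schedule)
        if random.random() < hmcr:
            x = random.choice(memory)          # memory consideration
            if random.random() < par:
                x = polynomial_mutation(x, lo, hi)  # pitch adjustment step
        else:
            x = random.uniform(lo, hi)         # random improvisation
        worst = max(memory, key=cost)
        if cost(x) < cost(worst):              # replace worst harmony
            memory[memory.index(worst)] = x
    return min(memory, key=cost)

best = harmony_search(lambda x: (x - 2.0) ** 2, -10, 10)
print(round(best, 2))
```

In the real ELD setting the decision vector holds one generation level per thermal unit, the cost includes valve-point ripple terms, and the equality/inequality constraints are enforced during improvisation.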

  15. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.

    PubMed

    Karthikeyan, M; Raja, T Sree Ranga

    2015-01-01

Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named dynamic harmony search with polynomial mutation (DHSPM), to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods.

  16. Mass Storage and Retrieval at Rome Laboratory

    NASA Technical Reports Server (NTRS)

    Kann, Joshua L.; Canfield, Brady W.; Jamberdino, Albert A.; Clarke, Bernard J.; Daniszewski, Ed; Sunada, Gary

    1996-01-01

As the speed and power of modern digital computers continues to advance, the demands on secondary mass storage systems grow. In many cases, the limitations of existing mass storage reduce the overall effectiveness of the computing system. Image storage and retrieval is one important area where improved storage technologies are required. Three-dimensional optical memories offer the advantage of large data density, on the order of 1 Tb/cm³, and faster transfer rates because of the parallel nature of optical recording. Such a system allows for the storage of multiple-Gbit sized images, which can be recorded and accessed at reasonable rates. Rome Laboratory is currently investigating several techniques to perform three-dimensional optical storage including holographic recording, two-photon recording, persistent spectral-hole burning, multi-wavelength DNA recording, and the use of bacteriorhodopsin as a recording material. In this paper, the current status of each of these on-going efforts is discussed. In particular, the potential payoffs as well as possible limitations are addressed.

  17. TauFactor: An open-source application for calculating tortuosity factors from tomographic data

    NASA Astrophysics Data System (ADS)

    Cooper, S. J.; Bertei, A.; Shearing, P. R.; Kilner, J. A.; Brandon, N. P.

TauFactor is a MATLAB application for efficiently calculating the tortuosity factor, as well as volume fractions, surface areas and triple phase boundary densities, from image based microstructural data. The tortuosity factor quantifies the apparent decrease in diffusive transport resulting from convolutions of the flow paths through porous media. TauFactor was originally developed to improve the understanding of electrode microstructures for batteries and fuel cells; however, the tortuosity factor has been of interest to a wide range of disciplines for over a century, including geoscience, biology and optics. It is still common practice to use correlations, such as that developed by Bruggeman, to approximate the tortuosity factor, but in recent years the increasing availability of 3D imaging techniques has spurred interest in calculating this quantity more directly. This tool provides a fast and accurate computational platform applicable to the big datasets (>10⁸ voxels) typical of modern tomography, without requiring high computational power.
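The Bruggeman correlation mentioned above estimates the tortuosity factor from the volume fraction alone, τ = ε^(-1/2), which is exactly the shortcut that direct voxel-based calculation replaces. A small sketch on synthetic voxel data (the array size and porosity are illustrative, and this is the correlation, not TauFactor's direct solver):

```python
# Volume fraction of a voxelized pore phase and the Bruggeman-type
# tortuosity estimate tau = eps**(-0.5). Synthetic data for illustration;
# TauFactor computes tau directly from the geometry instead.
import numpy as np

def volume_fraction(phase):
    """Fraction of True voxels in a boolean phase array."""
    return phase.mean()

def bruggeman_tau(eps):
    """Bruggeman correlation: tortuosity factor from porosity alone."""
    return eps ** -0.5

rng = np.random.default_rng(0)
pore = rng.random((50, 50, 50)) < 0.3   # ~30% porosity, uncorrelated voxels
eps = volume_fraction(pore)
print(f"porosity {eps:.2f}, Bruggeman tau {bruggeman_tau(eps):.2f}")
```

Real microstructures have correlated, winding pore networks, so the directly computed tortuosity factor can differ substantially from this porosity-only estimate, which is the tool's motivation.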

  18. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the WebSocket protocol and JSON format are used for communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and a LaTeX-based automatic reporting system. The cloud-based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared with the traditional software model of purchase, licensing, shipping, download, installation, maintenance, and upgrades, the cloud-based approach requires no installation and is easy to use and maintain, opening up a new way of delivering software. Cloud-based applications may well be the future of software development.
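The client-server exchange described above amounts to JSON messages carried over a WebSocket. A sketch of one request/reply pair, using only the standard library; the field names and values are hypothetical, since the paper does not specify its message schema:

```python
# Hypothetical JSON request/reply pair for the browser <-> compute server
# exchange. Field names are invented for illustration.
import json

request = json.dumps({"op": "compute_pupil",
                      "image_id": "scan-042",
                      "params": {"threshold": 0.5}})

# ...the request travels over the websocket; the server replies with JSON:
reply = json.loads('{"op": "compute_pupil", "status": "ok", "ellipticity": 0.98}')
print(reply["ellipticity"])
```

Keeping the wire format to plain JSON is what lets any modern browser act as the client, while the heavy image-processing libraries stay on the server.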

  19. Logic gates realized by nonvolatile GeTe/Sb2Te3 super lattice phase-change memory with a magnetic field input

    NASA Astrophysics Data System (ADS)

    Lu, Bin; Cheng, Xiaomin; Feng, Jinlong; Guan, Xiawei; Miao, Xiangshui

    2016-07-01

Nonvolatile memory devices or circuits that can implement both storage and computation are a crucial requirement for improving the efficiency of modern computers. In this work, we realize logic functions using a [GeTe/Sb2Te3]n super lattice phase change memory (PCM) cell, in which a higher threshold voltage is needed for phase change when a magnetic field is applied. First, the [GeTe/Sb2Te3]n super lattice cells were fabricated and the R-V curve was measured. Then we designed logic circuits with the super lattice PCM cell, verified by HSPICE simulation and experiments. Seven basic logic functions are first demonstrated in this letter; then several multi-input logic gates are presented. The proposed logic devices offer the advantages of simple structure and low power consumption, indicating that the super lattice PCM has potential for future nonvolatile central processing unit design, facilitating the development of massively parallel computing architectures.

  20. Volume and Value of Big Healthcare Data

    PubMed Central

    Dinov, Ivo D.

    2016-01-01

Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309

  1. dREL: a relational expression language for dictionary methods.

    PubMed

    Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R

    2012-08-27

The provision of precise metadata is an important but a largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language dREL that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall, J. Chem. Inf. Model. 2012, doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; it is to provide precise data dependency information to domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.

  2. Analogy of transistor function with modulating photonic band gap in electromagnetically induced grating

    PubMed Central

    Wang, Zhiguo; Ullah, Zakir; Gao, Mengqin; Zhang, Dan; Zhang, Yiqi; Gao, Hong; Zhang, Yanpeng

    2015-01-01

    Optical transistor is a device used to amplify and switch optical signals. Many researchers focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. Electronic transistor is the fundamental building block of modern electronic devices. To replace electronic components with optical ones, an equivalent optical transistor is required. Here we compare the behavior of an optical transistor with the reflection from a photonic band gap structure in an electromagnetically induced transparency medium. A control signal is used to modulate the photonic band gap structure. Power variation of the control signal is used to provide an analogy between the reflection behavior caused by modulating the photonic band gap structure and the shifting of Q-point (Operation point) as well as amplification function of optical transistor. By means of the control signal, the switching function of optical transistor has also been realized. Such experimental schemes could have potential applications in making optical diode and optical transistor used in quantum information processing. PMID:26349444

  3. Analogy of transistor function with modulating photonic band gap in electromagnetically induced grating

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Ullah, Zakir; Gao, Mengqin; Zhang, Dan; Zhang, Yiqi; Gao, Hong; Zhang, Yanpeng

    2015-09-01

    Optical transistor is a device used to amplify and switch optical signals. Many researchers focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. Electronic transistor is the fundamental building block of modern electronic devices. To replace electronic components with optical ones, an equivalent optical transistor is required. Here we compare the behavior of an optical transistor with the reflection from a photonic band gap structure in an electromagnetically induced transparency medium. A control signal is used to modulate the photonic band gap structure. Power variation of the control signal is used to provide an analogy between the reflection behavior caused by modulating the photonic band gap structure and the shifting of Q-point (Operation point) as well as amplification function of optical transistor. By means of the control signal, the switching function of optical transistor has also been realized. Such experimental schemes could have potential applications in making optical diode and optical transistor used in quantum information processing.

  4. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
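The reported scaling can be summarized as parallel efficiency, the achieved speedup divided by the core count; both figures come from the text.

```python
# Parallel efficiency implied by Trace's reported results:
# 158x speedup on 32 nodes of 12 cores each (384 cores total).

cores = 32 * 12
speedup = 158
efficiency = speedup / cores
print(f"{efficiency:.0%} parallel efficiency on {cores} cores")
```

An efficiency near 40% at this scale is respectable for iterative reconstruction, where the replicated reconstruction objects exist precisely to limit synchronization overhead between threads and processes.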

  5. Power Systems Operations and Controls | Grid Modernization | NREL

    Science.gov Websites

Research on transforming the centrally controlled electric grid, with one-way delivery of power from central-station power plants, into a modernized grid. Contact: Manager, Energy Systems Optimization and Control Group, murali.baggu@nrel.gov, 303-275-4337.

  6. Grid Integration Science, NREL Power Systems Engineering Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin

    This report highlights journal articles published in 2016 by researchers in the Power Systems Engineering Center. NREL's Power Systems Engineering Center published 47 journal and magazine articles in the past year, highlighting recent research in grid modernization.

  7. Identification of key ancestors of modern germplasm in a breeding program of maize.

    PubMed

    Technow, F; Schrag, T A; Schipprack, W; Melchinger, A E

    2014-12-01

Probabilities of gene origin computed from the genomic kinship matrix can accurately identify key ancestors of modern germplasm. Identifying the key ancestors of modern plant breeding populations can provide valuable insights into the history of a breeding program and provide reference genomes for next-generation whole-genome sequencing. In an animal breeding context, a method was developed that employs probabilities of gene origin, computed from the pedigree-based additive kinship matrix, for identifying key ancestors. Because reliable and complete pedigree information is often not available in plant breeding, we replaced the additive kinship matrix with the genomic kinship matrix. As a proof of concept, we applied this approach to simulated data sets with known ancestries. The relative contribution of the ancestral lines to later generations could be determined with high accuracy, with and without selection. Our method was subsequently used for identifying the key ancestors of the modern Dent germplasm of the public maize breeding program of the University of Hohenheim. We found that the modern germplasm can be traced back to six or seven key ancestors, with one or two of them having a disproportionately large contribution. These results largely corroborated conjectures based on early records of the breeding program. We conclude that probabilities of gene origin computed from the genomic kinship matrix can be used for identifying key ancestors in breeding programs and estimating the proportion of genes contributed by them.
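A common way to build the genomic kinship matrix that replaces pedigree kinship is the VanRaden genomic relationship matrix, computed from a lines-by-markers genotype matrix coded 0/1/2. This sketch shows that standard construction, not necessarily the paper's exact variant, and the toy genotypes are made up:

```python
# VanRaden-type genomic relationship (kinship) matrix from 0/1/2 genotypes.
# Standard construction for illustration; toy genotype matrix is invented.
import numpy as np

def genomic_kinship(M):
    """G = Z Z' / (2 * sum p(1-p)), with Z the frequency-centered genotypes."""
    p = M.mean(axis=0) / 2.0        # allele frequency per marker
    Z = M - 2.0 * p                  # center by expected genotype 2p
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

M = np.array([[0, 1, 2, 0],          # three lines, four markers
              [0, 1, 2, 1],
              [2, 0, 0, 2]], dtype=float)
G = genomic_kinship(M)
print(np.round(G, 2))
```

From a matrix like G, probabilities of gene origin can then be derived without any pedigree records, which is the substitution the paper demonstrates.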

  8. [Side Effects of Modernity : Dam Building, Health Care, and the Construction of Power in the Context of the Control of Schistosomiasis in Egypt in the 1960s and early 1970s].

    PubMed

    Brendel, Benjamin

    2017-09-01

    This article analyzes the modernization campaigns in Egypt in the 1960s and early 1970s. The regulation of the Nile by the Aswan High Dam and the resulting irrigation projects caused the rate of schistosomiasis infestation in the population to rise. The result was a discourse between experts from the global north and Egyptian elites about modernization, development aid, dam building and health care. The fight against schistosomiasis was like a cipher, which combined different power-laden concepts and arguments. This article will decode the cipher and allow a deeper look into the contemporary dimensions of power bound to this subject. The text is conceived around three thematic axes. The first deals with the discursive interplay of modernization, health and development aid in and for Egypt. The second focuses on far-reaching and long-standing arguments within an international expert discourse about these concepts. Finally, the third presents an exemplary case study of West German health and development aid for fighting schistosomiasis in the Egyptian Fayoum oasis.

  9. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    PubMed

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Economic analysis of wind-powered farmhouse and farm building heating systems

    NASA Astrophysics Data System (ADS)

    Stafford, R. W.; Greeb, F. J.; Smith, M. H.; Deschenes, C.; Weaver, N. L.

    1981-01-01

    The break-even values of wind energy for selected farmhouses and farm buildings were evaluated, focusing on the effects of thermal storage on the use of WECS production. Farmhouse structural models include three types derived from a national survey: an older structure, a more modern structure, and a passive solar structure. The eight farm building applications include: (1) poultry layers; (2) poultry brooding/layers; (3) poultry broilers; (4) poultry turkeys; (5) swine farrowing; (6) swine growing/finishing; (7) dairy; and (8) lambing. The farm buildings represent the spectrum of animal types, heating energy use, and major contributions to national agricultural economic values. All energy analyses are based on hour-by-hour computations which allow for growth of animals, sensible and latent heat production, and ventilation requirements.
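    A break-even value of this kind can be sketched as the displaced-heat price at which annual savings exactly cover the annualized system cost. All figures below are hypothetical, and the capital-recovery-factor annualization is an assumed method, not necessarily the one used in the study:

    ```python
    # Hypothetical figures -- not from the study.
    annual_heating_kwh = 25_000      # farmhouse heating demand met by the WECS
    wecs_capital_cost = 18_000.0     # installed cost, USD
    lifetime_years = 20
    discount_rate = 0.08

    # Capital recovery factor annualizes the up-front cost.
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / (
        (1 + discount_rate) ** lifetime_years - 1
    )
    annualized_cost = wecs_capital_cost * crf

    # Break-even value: price per kWh of displaced heat at which the
    # system exactly pays for itself each year.
    break_even_per_kwh = annualized_cost / annual_heating_kwh
    print(f"{break_even_per_kwh:.3f} USD/kWh")
    ```

    If the local cost of heating energy exceeds this figure, the wind system saves money over its lifetime under these assumptions.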

  11. Two Years Later, Greensburg is Officially Green - with NREL's Help | News

    Science.gov Websites

    Question: what will become of Greensburg? With NREL's help, the rebuilt town meets the community's goal of being powered entirely by renewable sources.

  12. Grid-Tied Photovoltaic Power System

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2011-01-01

    A grid-tied photovoltaic (PV) power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. Operating costs of a PV power system are low compared to conventional power technologies. This method can displace the highest-cost electricity during times of peak demand in most climatic regions, and thus reduce grid loading. Net metering is often used, in which independent power producers such as PV power systems are connected to the utility grid via the customer's main service panel and meter. When the PV power system generates more power than required at that location, the excess power is provided to the utility grid. The customer pays the net of the power purchased when the on-site power demand is greater than the on-site power production, and the excess power is returned to the utility grid. Power generated by the PV system reduces utility demand, and the surplus power aids the community. Modern PV panels are readily available, reliable, efficient, and economical, with a life expectancy of at least 25 years. Modern electronics have been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical, with a life expectancy equal to that of modern PV panels. The grid-tied PV power system was successfully designed and developed, which served to validate the basic principles and the theoretical work that was performed. Grid-tied PV power systems are reliable, maintenance-free, long-life power systems, and are of significant value to NASA and the community. Of particular value are the analytical tools and capabilities that have been successfully developed. Performance predictions can be made confidently for grid-tied PV systems of various scales. The work was done under the NASA Hybrid Power Management (HPM) Program, which integrates diverse power devices in an optimal configuration for space and terrestrial applications.
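    The net-metering arithmetic described above can be illustrated with hypothetical hourly load and PV profiles (all numbers invented):

    ```python
    # Hourly facility load and PV output for one day, in kWh (hypothetical).
    load = [1.0]*6 + [2.0]*4 + [3.0]*6 + [2.0]*4 + [1.0]*4   # 24 hours
    pv   = [0.0]*6 + [1.0]*2 + [4.0]*8 + [1.0]*2 + [0.0]*6   # 24 hours

    bought = sum(max(l - p, 0.0) for l, p in zip(load, pv))   # drawn from grid
    sold   = sum(max(p - l, 0.0) for l, p in zip(load, pv))   # exported to grid

    # Under net metering the customer is billed for the net energy only.
    net_kwh = bought - sold
    print(bought, sold, net_kwh)
    ```

    Midday PV surplus is exported and credited against evening imports, so the bill reflects the 8 kWh net rather than the 18 kWh actually drawn from the grid.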

  13. Laboratory Mathematics

    ERIC Educational Resources Information Center

    de Mestre, Neville

    2004-01-01

    Computers were invented to help mathematicians perform long and complicated calculations more efficiently. By the time that a computing area became a familiar space in primary and secondary schools, the initial motivation for computer use had been submerged in the many other functions that modern computers now accomplish. Not only the mathematics…

  14. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  15. Measuring Power Flow in Electric Vehicles

    NASA Technical Reports Server (NTRS)

    Griffin, Daniel C., Jr; Wiker, G. A.

    1983-01-01

    The instrument accommodates the fast rise and fall times of waveforms characteristic of modern, efficient power controllers. The power meter multiplies analog signals proportional to voltage and current, and converts the resulting signal to a frequency. Two mechanical counters are provided: one for charging and one for discharging.

  16. Plancton: an opportunistic distributed computing project based on Docker containers

    NASA Astrophysics Data System (ADS)

    Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara

    2017-10-01

    The computing power of most modern commodity computers is far from fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that containers can be started almost instantly upon request. We show how the fast start-up and disposal of containers enables an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable management advantage compared to virtual machines.
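    The spawn/kill policy can be caricatured as follows. This is a toy stand-in, not actual Plancton code; the thresholds, pool cap, and utilisation samples are invented:

    ```python
    # Toy version of an opportunistic spawn/kill policy: grow the pool of
    # worker containers while the host CPU is mostly idle, shrink it when
    # the host user starts a demanding task.

    def target_containers(current: int, cpu_util: float,
                          low: float = 0.5, high: float = 0.8,
                          max_containers: int = 10) -> int:
        """Return how many worker containers should be running."""
        if cpu_util < low:                  # host mostly idle: grow the pool
            return min(current + 1, max_containers)
        if cpu_util > high:                 # host busy: kill a container
            return max(current - 1, 0)
        return current                      # in between: hold steady

    # Simulated monitoring loop: utilisation rises as the host user starts work.
    pool = 0
    for util in [0.1, 0.2, 0.3, 0.9, 0.95, 0.6]:
        pool = target_containers(pool, util)
    print(pool)
    ```

    In the real service the pool changes would translate into Docker API calls to start or kill containers; here only the policy decision is modeled.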

  17. Turbine generator evaluation for the Eesti-Energia Estonia and Baltic power plants. Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-01

    The report evaluates the feasibility of modernizing the 200 MW turbines and generators at two Estonian power plants in order to improve performance and/or availability. This is Volume 1, which includes the following: (1) scope; (2) evaluation approach; (3) summary of major recommendations; (4) performance test descriptions; (5) current technology -- component description; (6) recommended studies; (7) recommendations; (8) district heating; (9) description of turbine K-200-130; (10) turbine evaluation results; (11) generator; (12) estimation of modernization costs.

  18. Percolation Theory and Modern Hydraulic Fracturing

    NASA Astrophysics Data System (ADS)

    Norris, J. Q.; Turcotte, D. L.; Rundle, J. B.

    2015-12-01

    During the past few years, we have been developing a percolation model for fracking. This model provides a powerful tool for understanding the growth and properties of the complex fracture networks generated during modern high-volume hydraulic fracture stimulations of tight shale reservoirs. The model can also be used to understand the interaction between the growing fracture network and natural reservoir features such as joint sets and faults. Additionally, the model produces a power-law distribution of bursts which can easily be compared to observed microseismicity.

  19. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    PubMed

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation in computational molecular biology. The continuing growth of biological sequence databases establishes the need for efficient parallel implementations on modern accelerators. This paper presents new approaches to high-performance biological sequence database scanning with the Smith-Waterman algorithm, and to the first stage of progressive multiple sequence alignment based on the ClustalW heuristic, on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture: cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance of up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes, for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive with optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
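    For reference, the sequential Smith-Waterman recurrence that such implementations parallelize can be sketched as follows (the scoring parameters are arbitrary, and the paper's three-level parallelization is not shown):

    ```python
    # Scalar reference version of Smith-Waterman local alignment scoring.
    # Cluster-, thread-, and vector-level parallel versions distribute this
    # same recurrence; only the sequential form is shown here.

    def smith_waterman_score(a: str, b: str,
                             match: int = 2, mismatch: int = -1,
                             gap: int = -2) -> int:
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                # Local alignment: scores are floored at zero.
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("ACGT", "ACGT"))  # 4 matches at +2 each -> 8
    ```

    GCUPS (giga cell updates per second) counts how many of these H[i][j] cell updates the hardware performs per second, which is why the inner recurrence is the natural target for vectorization.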

  20. Informatics in dental education: a horizon of opportunity.

    PubMed

    Abbey, L M

    1989-11-01

    Computers have presented society with the largest array of opportunities since the printing press. More specifically, in dental education they represent the path to freedom from the memory-based curriculum. Computers allow us to be constantly in touch with the entire scope of knowledge necessary for decision making in every aspect of the process of preparing young men and women to practice dentistry. No longer is it necessary to spend the energy or time previously used to memorize facts, test for retention of facts, or be concerned with remembering facts when dealing with our patients. Modern information management systems can assume that task, allowing dentists to concentrate on understanding, skill, judgement and wisdom while helping patients deal with their problems within a health care system that is simultaneously baffling in its complexity and overflowing with options. This paper presents a summary of the choices facing dental educators as computers continue to afford us the freedom to look differently at teaching, research and practice. The discussion will elaborate some of the ways dental educators must think differently about the educational process in order to utilize fully the power of computers in curriculum development and tracking, integration of basic and clinical teaching, problem solving, patient management, record keeping and research. Some alternative strategies will be discussed that may facilitate the transition from the memory-based to the computer-based curriculum and practice.

  1. Taking advantage of modern turbines

    NASA Astrophysics Data System (ADS)

    Thresher, Robert

    2018-06-01

    Wind facilities have generally deployed turbines of the same power and height in regular uniform arrays. Now, the modern generation of turbines, with customer-selectable tower heights and larger rotors, can significantly increase wind energy's economic potential using less land to generate cheaper electricity.

  2. Power Market Design | Grid Modernization | NREL

    Science.gov Websites

    NREL researchers are developing a modeling platform to test power market designs, combining a commercial electricity production simulation model with FESTIV, an NREL-developed flexible energy scheduling tool. The Grid Market Design project team consists of researchers in power systems and economics.

  3. STORM WATER MANAGEMENT MODEL (SWMM) MODERNIZATION

    EPA Science Inventory

    The U.S. Environmental Protection Agency's Water Supply and Water Resources Division partnered with the consulting firm CDM to redevelop and modernize the Storm Water Management Model (SWMM). In the initial phase of this project EPA rewrote SWMM's computational engine usi...

  4. A History of Computer Numerical Control.

    ERIC Educational Resources Information Center

    Haggen, Gilbert L.

    Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…

  5. Publications | Grid Modernization | NREL

    Science.gov Websites

    Featured publications include Photovoltaics: Trajectories and Challenges and Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow.

  6. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. 
I find their arguments unconvincing without a "Sure/Shor separator"---a criterion that separates the already-verified quantum states from those that appear in Shor's factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite---and in particular, whether Grover's algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum computing based proof that PP is closed under intersection. On the other hand, I also present one model---wherein we could sample the entire history of a hidden variable---that appears to be more powerful than standard quantum computing, but only slightly so.

  7. Isolated step-down DC -DC converter for electric vehicles

    NASA Astrophysics Data System (ADS)

    Kukovinets, O. V.; Sidorov, K. M.; Yutt, V. E.

    2018-02-01

    The modern motor-vehicle industry is moving rapidly toward the production of electrically driven cars, improving their range and the efficiency of their components, in particular the step-down DC/DC converter that supplies the 12/24 V onboard circuit of an electric vehicle from the high-voltage battery. The purpose of this article is to identify the best circuit topology for designing advanced step-down DC/DC converters with the smallest mass and volume and the highest efficiency and power, which in turn has a positive effect on the driving range of an electric vehicle (EV). On the basis of computational research into existing and implemented circuit topologies of step-down DC/DC converters (series resonant converter, full-bridge converter with phase shifting, LLC resonant converter), a comprehensive analysis was carried out on the following characteristics: specific volume, specific weight, power, and efficiency. The data obtained identified the best technical option: the LLC resonant converter. The results can serve as guide material in the design of traction equipment components for electric vehicles, providing the best technical solutions in the design and manufacturing of power conversion equipment, self-contained power supply systems, and advanced driver assistance systems.

  8. Cots Correlator Platform

    NASA Astrophysics Data System (ADS)

    Schaaf, Kjeld; Overeem, Ruud

    2004-06-01

    Moore’s law is best exploited by using consumer-market hardware. In particular, the gaming industry pushes the limits of processor performance, reducing the cost per raw flop even faster than Moore’s law predicts. Beyond the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. Typical Beowulf clusters of PCs are well known as supercomputers. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming-data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the use of supercomputer technology. Raw processing power is provided by graphics processors, combined with an InfiniBand host bus adapter with integrated data-stream-handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer-market prices.
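    The core correlator operation can be sketched as a toy FX-style correlator in NumPy: channelize each antenna stream with an FFT (the F step), then cross-multiply the spectra (the X step). The signals and the 3-sample delay below are synthetic; this is not the platform's GPU/InfiniBand implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two antenna voltage streams observing the same noise-like source,
    # the second offset by 3 samples (hypothetical toy signal).
    n = 1024
    source = rng.standard_normal(n + 8)
    ant_a = source[:n]
    ant_b = source[3:3 + n]

    # F step: channelize with an FFT.
    spec_a = np.fft.fft(ant_a)
    spec_b = np.fft.fft(ant_b)

    # X step: cross-multiply and accumulate. The inverse FFT of the
    # cross-spectrum is the cross-correlation, which peaks at the offset.
    cross_spectrum = spec_a * np.conj(spec_b)
    lags = np.fft.ifft(cross_spectrum).real
    delay = int(np.argmax(lags))
    print(delay)
    ```

    A real correlator repeats this multiply-accumulate for every antenna pair and frequency channel on streaming data, which is what pushes the processing requirement into supercomputer territory.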

  9. Scientists and artists: ""Hey! You got art in my science! You got science on my art

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elfman, Mary E; Hayes, Birchard P; Michel, Kelly D

    The pairing of science and art has proven to be a powerful combination since the Renaissance. The combination of these two seemingly disparate disciplines ensured that even complex scientific theories could be explored and effectively communicated to both the subject matter expert and the layman. In modern times, science and art have frequently been considered disjoint, with objectives, philosophies, and perspectives often in direct opposition to each other. However, given the technological advances in computer science and high-fidelity 3-D graphics development tools, this marriage of art and science is once again logically complementary. Art, in the form of computer graphics and animation created on supercomputers, has already proven to be a powerful tool for improving scientific research and providing insight into nuclear phenomena. This paper discusses the power of pairing artists with scientists and engineers in order to pursue the possibilities of a widely accessible, lightweight, interactive approach. We will use a discussion of photo-realism versus stylization to illuminate the expected beneficial outcome of such collaborations and the societal advantages gained by a non-traditional partnering of these two fields.

  10. Differential Power Analysis as a digital forensic tool.

    PubMed

    Souvignet, T; Frinken, J

    2013-07-10

    Electronic payment fraud is considered a serious international crime by Europol. An important part of this fraud comes from payment card data skimming. This type of fraud consists of an illegal acquisition of payment card details when a user is withdrawing cash at an automated teller machine (ATM) or paying at a point of sale (POS). Modern skimming devices, also known as skimmers, use secure crypto-algorithms (e.g. Advanced Encryption Standard (AES)) to protect skimmed data stored within their memory. In order to provide digital evidence in criminal cases involving skimmers, law enforcement agencies (LEAs) must retrieve the plaintext skimmed data, generally without having knowledge of the secret key. This article proposes an alternative to the current solution at the Bundeskriminalamt (BKA) to reveal the secret key. The proposed solution is non-invasive, based on Power Analysis Attack (PAA). This article first describes the structure and the behaviour of an AES skimmer, followed by the proposal of the full operational PAA process, from power measurements to attack computation. Finally, it presents results obtained in several cases, explaining the latest improvements and providing some ideas for further developments. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
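    A textbook difference-of-means power analysis can be sketched on simulated leakage. The "cipher" here is just a random byte permutation standing in for an S-box, and the key, trace count, and noise level are invented; this is an illustration of the general attack class, not the operational tooling described in the article:

    ```python
    import random

    rng = random.Random(42)

    # A fixed random byte permutation stands in for the cipher's S-box.
    SBOX = list(range(256))
    rng.shuffle(SBOX)

    SECRET_KEY = 0x3C

    def hw(x: int) -> int:
        """Hamming weight of a byte."""
        return bin(x).count("1")

    # Simulated acquisition: one leakage sample per plaintext byte; the
    # device leaks the Hamming weight of the S-box output, plus noise.
    plaintexts = [rng.randrange(256) for _ in range(2000)]
    traces = [hw(SBOX[p ^ SECRET_KEY]) + rng.gauss(0, 1.0) for p in plaintexts]

    def dom(guess: int) -> float:
        """Difference of means, partitioning traces on the predicted
        least-significant bit of the S-box output under this key guess."""
        ones = [t for p, t in zip(plaintexts, traces) if SBOX[p ^ guess] & 1]
        zeros = [t for p, t in zip(plaintexts, traces) if not SBOX[p ^ guess] & 1]
        return sum(ones) / len(ones) - sum(zeros) / len(zeros)

    # The correct guess sorts traces consistently with the real leakage,
    # so its difference of means stands out; wrong guesses average to ~0.
    recovered = max(range(256), key=dom)
    print(hex(recovered))
    ```

    The nonlinearity of the S-box is essential: it makes the selection bit effectively random for every wrong key guess, which is what isolates the correct one.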

  11. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  12. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  13. nMoldyn: a program package for a neutron scattering oriented analysis of molecular dynamics simulations.

    PubMed

    Róg, T; Murzyn, K; Hinsen, K; Kneller, G R

    2003-04-15

    We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comput. Phys. Commun. 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The possibility to compute memory functions is a new and powerful feature that allows one to relate simulation results to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
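    The velocity autocorrelation function and its Fourier transform (the density of states) mentioned above can be illustrated on a toy one-particle trajectory; the synthetic signal, time step, and frequency below are invented and this is not nMoldyn's code:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy trajectory: velocity of a particle oscillating at angular
    # frequency omega, sampled at interval dt, with measurement noise.
    dt, n_steps, omega = 0.01, 4096, 5.0
    t = np.arange(n_steps) * dt
    velocity = np.cos(omega * t) + 0.1 * rng.standard_normal(n_steps)

    def vacf(v: np.ndarray, n_lags: int) -> np.ndarray:
        """Velocity autocorrelation <v(0) v(tau)> averaged over time origins."""
        return np.array([np.mean(v[:len(v) - k] * v[k:]) for k in range(n_lags)])

    acf = vacf(velocity, 1024)

    # Density of states: Fourier transform of the VACF; the spectrum
    # should peak near the oscillation frequency omega.
    spectrum = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(len(acf), d=dt) * 2 * np.pi  # angular frequency
    peak_omega = freqs[np.argmax(spectrum)]
    print(round(peak_omega, 2))
    ```

    In a real analysis the VACF is averaged over all atoms (with appropriate weights), but the lag-average-then-transform pipeline is the same.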

  14. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    Push-based database management systems (DBMS) are a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power from the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.

  15. Pervasive sensing

    NASA Astrophysics Data System (ADS)

    Nagel, David J.

    2000-11-01

    The coordinated exploitation of modern communication, micro-sensor and computer technologies makes it possible to give global reach to our senses. Web-cameras for vision, web-microphones for hearing and web-'noses' for smelling, plus the abilities to sense many factors we cannot ordinarily perceive, are either available or will be soon. Applications include (1) determination of weather and environmental conditions on dense grids or over large areas, (2) monitoring of energy usage in buildings, (3) sensing the condition of hardware in electrical power distribution and information systems, (4) improving process control and other manufacturing, (5) development of intelligent terrestrial, marine, aeronautical and space transportation systems, (6) managing the continuum of routine security monitoring, diverse crises and military actions, and (7) medicine, notably the monitoring of the physiology and living conditions of individuals. Some of the emerging capabilities, such as the ability to measure remotely the conditions inside of people in real time, raise interesting social concerns centered on privacy issues. Methods for sensor data fusion and designs for human-computer interfaces are both crucial for the full realization of the potential of pervasive sensing. Computer-generated virtual reality, augmented with real-time sensor data, should be an effective means for presenting information from distributed sensors.

  16. Density functional theory in the solid state

    PubMed Central

    Hasnip, Philip J.; Refson, Keith; Probert, Matt I. J.; Yates, Jonathan R.; Clark, Stewart J.; Pickard, Chris J.

    2014-01-01

    Density functional theory (DFT) has been used in many fields of the physical sciences, but none so successfully as in the solid state. From its origins in condensed matter physics, it has expanded into materials science, high-pressure physics and mineralogy, solid-state chemistry and more, powering entire computational subdisciplines. Modern DFT simulation codes can calculate a vast range of structural, chemical, optical, spectroscopic, elastic, vibrational and thermodynamic phenomena. The ability to predict structure–property relationships has revolutionized experimental fields, such as vibrational and solid-state NMR spectroscopy, where it is the primary method to analyse and interpret experimental spectra. In semiconductor physics, great progress has been made in the electronic structure of bulk and defect states despite the severe challenges presented by the description of excited states. Studies are no longer restricted to known crystallographic structures. DFT is increasingly used as an exploratory tool for materials discovery and computational experiments, culminating in ex nihilo crystal structure prediction, which addresses the long-standing difficult problem of how to predict crystal structure polymorphs from nothing but a specified chemical composition. We present an overview of the capabilities of solid-state DFT simulations in all of these topics, illustrated with recent examples using the CASTEP computer program. PMID:24516184

  17. Solar Power Data for Integration Studies | Grid Modernization | NREL

    Science.gov Websites

    NREL's Solar Power Data for Integration Studies are synthetic solar photovoltaic (PV) power plant data points for the United States representing the year 2006. The data are intended for use by energy professionals.

  18. An evolutionary account of status, power, and career in modern societies.

    PubMed

    Fieder, Martin; Huber, Susanne

    2012-06-01

    We hypothesize that in modern societies the striving for high positions in the hierarchy of organizations is equivalent to the striving for status and power in historical and traditional societies. Analyzing a sample of 4,491 US men and 5,326 US women, we find that holding a supervisory position or being in charge of hiring and firing is positively associated with offspring count in men but not in women. The positive effect in men is attributable mainly to the higher proportion of childlessness among men in non-supervisory positions and those without the power to hire and fire. This effect is in accordance with the positive relationship between other status indicators and reproductive success found in men from traditional, historical, and modern societies. In women, we further find a curvilinear relationship between income percentile and offspring number by analyzing US census data, indicating that women may strive for resources associated with advancement rather than for status per se.

  19. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
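    The claim above describes combining a hardware power-consumption profile with an application's operation mix to derive a per-application power profile. A hedged Python sketch of that idea follows; the operation classes and per-operation power figures are invented for illustration and are not from the patent.

```python
# Sketch of the profiling idea: a hardware power-consumption profile gives
# power per operation class; weighting it by the application's operation
# counts yields the application's power consumption profile.
# All numbers and class names below are illustrative assumptions.

def application_power_profile(hardware_profile, op_counts):
    """Per-class and total power consumption for an application, given the
    compute node's hardware profile and the application's operation mix."""
    profile = {op: hardware_profile[op] * n for op, n in op_counts.items()}
    profile["total"] = sum(profile.values())
    return profile

hw = {"flop": 0.5, "mem": 2.0, "net": 3.0}   # watts per unit of work (assumed)
app = {"flop": 100, "mem": 10, "net": 4}     # application's operation counts
report = application_power_profile(hw, app)  # the reported profile
```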

  20. Modern high-powered LED curing lights and their effect on pulp chamber temperature of bulk and incrementally cured composite resin.

    PubMed

    Oberholzer, T G; Makofane, M E; du Preez, I C; George, R

    2012-06-01

    Pulpal temperature changes induced by modern high-powered light-emitting diodes (LEDs) are of concern when used to cure composite resins. This study showed an increase in pulp chamber temperature with an increase in power density for all light cure units (LCUs) when used to bulk cure composite resin. Amongst the three LEDs tested, the Elipar Freelight-2 recorded the highest temperature changes. Bulk curing recorded a significantly larger rise in pulp chamber temperature than incrementally cured resin for all light types except the Smartligh PS. Both the high-powered LED and the conventional curing units can generate heat. Though this temperature rise may not be sufficient to cause irreversible pulpal damage, it would be safer to incrementally cure resins.

  1. Taking advantage of modern turbines

    DOE PAGES

    Thresher, Robert

    2018-05-14

    Here, wind facilities have generally deployed turbines of the same power and height in regular uniform arrays. Now, the modern generation of turbines, with customer-selectable tower heights and larger rotors, can significantly increase wind energy's economic potential using less land to generate cheaper electricity.

  2. Taking advantage of modern turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thresher, Robert

    Here, wind facilities have generally deployed turbines of the same power and height in regular uniform arrays. Now, the modern generation of turbines, with customer-selectable tower heights and larger rotors, can significantly increase wind energy's economic potential using less land to generate cheaper electricity.

  3. Modernization of the Slovenian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Vidrih, R.; Godec, M.; Gosar, A.; Sincic, P.; Tasic, I.; Zivcic, M.

    2003-04-01

    The Seismology Office of the Environmental Agency of the Republic of Slovenia is responsible for fast and reliable information about earthquakes originating in the area of Slovenia and nearby. In the year 2000 the project Modernization of the Slovenian National Seismic Network started. The purpose of the modernized seismic network is to enable fast and accurate automatic location of earthquakes, to determine earthquake parameters, and to collect data on local, regional and global earthquakes. The modernized network will be finished in the year 2004 and will consist of 25 seismic station subsystems based on Q730 remote broadband data loggers, transmitting data in real time to the Data Center in Ljubljana, where the Seismology Office is located. The remote broadband station subsystems include 16 surface broadband seismometers CMG-40T, 5 broadband seismometers CMG-40T with strong-motion accelerographs EpiSensor, and 4 borehole broadband seismometers CMG-40T, all with accurate timing provided by GPS receivers. The seismic network will cover the entire Slovenian territory, an area of 20,256 km2. The network is planned so that more seismic stations are located around bigger urban centres and in regions with greater vulnerability (NW Slovenia, the Krsko-Brezice region). By the end of the year 2002, three old seismic stations had been modernized and ten new seismic stations built. All seismic stations transmit data to UNIX-based computers running Antelope system software. The data are transmitted in real time using TCP/IP protocols over the Government Wide Area Network. Real-time data are also exchanged with seismic networks in the neighbouring countries, where data are collected from seismic stations close to the Slovenian border. A typical seismic station consists of a seismic shaft with the sensor and the data acquisition system, and a service shaft with communication equipment (modem, router) and a power supply with a battery box, which provides energy in case of mains failure. The data acquisition systems record continuous time-series sampled at 200 sps, 20 sps and 1 sps.

  4. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    PubMed Central

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 MPixels/s and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
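    For reference, the iterative update that the pipelined hardware realises can be sketched in a few lines of NumPy. The gradient estimator and neighbour-averaging kernel below are simplified choices for illustration, not the paper's exact filter coefficients.

```python
# One Jacobi-style iteration of the classic Horn-Schunck optical flow update.
import numpy as np

def neighbor_avg(f):
    """4-neighbour average with edge replication (a simplifying choice)."""
    p = np.pad(f, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

def horn_schunck_step(I1, I2, u, v, alpha=1.0):
    """Refine the flow field (u, v) between frames I1 and I2.
    alpha weights the smoothness term against the data term."""
    Iy, Ix = np.gradient(I1.astype(float))    # spatial image gradients
    It = I2.astype(float) - I1.astype(float)  # temporal gradient
    ua, va = neighbor_avg(u), neighbor_avg(v)
    common = (Ix * ua + Iy * va + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
    return ua - Ix * common, va - Iy * common
```

    In a full solver this step is iterated to convergence; the hardware version instead unrolls the iterations into pipeline stages to sustain the reported throughput.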

  5. A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.

    PubMed

    D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer

    2018-06-27

    Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.
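    The time-mode encoding mentioned above can be illustrated very simply: in a slope-based converter, a global ramp rises linearly and each pixel's value is encoded as the time at which the ramp crosses it. The sketch below is a toy model with an assumed linear ramp; it is not the paper's circuit.

```python
# Toy pulse-width (time-mode) encoding as used conceptually by slope-based
# converters: pixel value -> ramp crossing time. Parameters are assumptions.

def crossing_time(pixel_value, ramp_rate=1.0):
    """Time at which a linear ramp v(t) = ramp_rate * t reaches pixel_value."""
    return pixel_value / ramp_rate

def encode_frame(pixels, ramp_rate=1.0):
    """Encode a list of pixel values as crossing times."""
    return [crossing_time(p, ramp_rate) for p in pixels]
```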

  6. Efficient hardware implementation of the Horn-Schunck algorithm for high-resolution real-time dense optical flow sensor.

    PubMed

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-02-12

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 MPixels/s and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.

  7. High performance flight computer developed for deep space applications

    NASA Technical Reports Server (NTRS)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real-time embedded deep space applications, embodying the lessons learned on Galileo and modern computer technology, is described. The requirements are listed and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine, where 16 designates the databus width) was initiated to support the Mariner Mark II (MM2) project. The computer is based on a radiation-hardened emulation of a modern 32-bit microprocessor and its family of support devices, including a high-performance floating-point accelerator. Additional custom devices, which include a coprocessor to improve input/output capabilities, a memory interface chip, and a support chip that provides management of all fault-tolerant features, are described. Detailed supporting analyses and rationale justifying specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  8. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  9. Design and Fabrication of Millimeter Wave Hexagonal Nano-Ferrite Circulator on Silicon CMOS Substrate

    NASA Astrophysics Data System (ADS)

    Oukacha, Hassan

    The rapid advancement of Complementary Metal Oxide Semiconductor (CMOS) technology has formed the backbone of the modern computing revolution, enabling the development of computationally intensive electronic devices that are smaller, faster, less expensive, and consume less power. This well-established technology has transformed the mobile computing and communications industries by providing high levels of system integration on a single substrate, high reliability and low manufacturing cost. The driving force behind this computing revolution is the scaling of semiconductor devices to smaller geometries, which has resulted in faster switching speeds and the promise of replacing traditional, bulky radio frequency (RF) components with miniaturized devices. Such devices play an important role in our society, enabling ubiquitous computing and on-demand data access. This thesis presents the design and development of a magnetic circulator component in a standard 180 nm CMOS process. The design approach involves integration of nanoscale ferrite materials on a CMOS chip to avoid the bulky magnetic materials employed in conventional circulators. This device constitutes the next generation of broadband millimeter-wave circulators integrated in CMOS using ferrite materials operating in the 60 GHz frequency band. The unlicensed ultra-high frequency spectrum around 60 GHz offers many benefits: very high immunity to interference, high security, and frequency re-use. Results of both simulations and measurements are presented in this thesis. The presented results show the benefits of this technique and its potential for incorporating a complete system-on-chip (SoC) that includes a low noise amplifier, power amplifier, and antenna. This system-on-chip can be used in the same applications where the conventional circulator has been employed, including communication systems, radar systems, navigation and air traffic control, and military equipment. This range of applications shows how crucial the circulator is to many industries and the need for smaller, cost-effective RF components.

  10. BOOK REVIEW: Mathematica for Theoretical Physics: Electrodynamics, Quantum Mechanics, General Relativity and Fractals

    NASA Astrophysics Data System (ADS)

    Heusler, Stefan

    2006-12-01

    The main focus of the second, enlarged edition of the book Mathematica for Theoretical Physics is on computational examples using the computer program Mathematica in various areas in physics. It is a notebook rather than a textbook. Indeed, the book is just a printout of the Mathematica notebooks included on the CD. The second edition is divided into two volumes, the first covering classical mechanics and nonlinear dynamics, the second dealing with examples in electrodynamics, quantum mechanics, general relativity and fractal geometry. The second volume is not suited for newcomers because basic and simple physical ideas which lead to complex formulas are not explained in detail. Instead, the computer technology makes it possible to write down and manipulate formulas of practically any length. For researchers with experience in computing, the book contains a lot of interesting and non-trivial examples. Most of the examples discussed are standard textbook problems, but the power of Mathematica opens the path to more sophisticated solutions. For example, the exact solution for the perihelion shift of Mercury within general relativity is worked out in detail using elliptic functions. The virial equation of state for molecules' interaction with Lennard-Jones-like potentials is discussed, including both classical and quantum corrections to the second virial coefficient. Interestingly, closed solutions become available using sophisticated computing methods within Mathematica. In my opinion, the textbook should not show formulas in detail which cover three or more pages—these technical data should just be contained on the CD. Instead, the textbook should focus on more detailed explanation of the physical concepts behind the technicalities. The discussion of the virial equation would benefit much from replacing 15 pages of Mathematica output with 15 pages of further explanation and motivation. 
In this combination, the power of computing merged with physical intuition would be of benefit even for newcomers. In summary, this book shows in a convincing manner how classical problems in physics can be attacked with modern computing technology. The second volume is interesting for experienced users of Mathematica. For students, the textbook can be very useful in combination with a seminar.

  11. Renewable Energy Generation and Storage Models | Grid Modernization | NREL

    Science.gov Websites

    NREL researchers develop generator, power plant, and storage models for simulation and validation of renewable energy systems. This includes power hardware-in-the-loop testing, a combined software-and-hardware simulation method in which simulated models interact with physical power hardware.

  12. Microgrids | Grid Modernization | NREL

    Science.gov Websites

    NREL develops control algorithms for microgrid integration and validates them through controller hardware-in-the-loop testing, in which a physical controller interacts with a model of the microgrid and its associated power devices, and through power hardware-in-the-loop testing; in one such experiment, operation was validated using a programmable DC power supply.

  13. Fuzzy Energy Management for a Catenary-Battery-Ultracapacitor based Hybrid Tramway

    NASA Astrophysics Data System (ADS)

    Jibin, Yang; Jiye, Zhang; Pengyun, Song

    2017-05-01

    In this paper, an energy management strategy (EMS) based on fuzzy logic control for a catenary-battery-ultracapacitor powered hybrid modern tramway is presented. Fuzzy logic controllers for the catenary zone and the catenary-less zone were designed separately by analyzing the structure and working modes of the hybrid system, and an energy management strategy based on double fuzzy logic control was then proposed to enhance fuel economy. A simulation model of the hybrid modern tramway was developed in the MATLAB/Simulink environment. The simulation results show that the proposed EMS can satisfy the tramway's dynamic performance demands and distribute power reasonably among the power sources.
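    To make the fuzzy-logic idea concrete, here is a toy Python sketch: triangular membership functions fuzzify the battery state of charge (SOC), and a weighted rule base decides what fraction of the demanded power the battery supplies, with the ultracapacitor covering the rest. The membership shapes, rule weights, and SOC ranges are invented for illustration and are not the paper's controller.

```python
# Toy fuzzy-logic power split for a battery/ultracapacitor hybrid.
# Membership functions and rule outputs are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def battery_share(soc):
    """Defuzzified battery share of power demand via a weighted average of
    three rules: low SOC -> 10%, medium SOC -> 50%, high SOC -> 90%."""
    low = tri(soc, -0.01, 0.0, 0.5)
    med = tri(soc, 0.0, 0.5, 1.0)
    high = tri(soc, 0.5, 1.0, 1.01)
    w = low + med + high
    return (0.1 * low + 0.5 * med + 0.9 * high) / w
```

    A real EMS would add inputs such as demanded power and ultracapacitor voltage, and a second rule base for the catenary zone, as the abstract describes.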

  14. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  15. Diffraction scattering computed tomography: a window into the structures of complex nanomaterials

    PubMed Central

    Birkbak, M. E.; Leemreize, H.; Frølich, S.; Stock, S. R.

    2015-01-01

    Modern functional nanomaterials and devices are increasingly composed of multiple phases arranged in three dimensions over several length scales. Therefore there is a pressing demand for improved methods for structural characterization of such complex materials. An excellent emerging technique that addresses this problem is diffraction/scattering computed tomography (DSCT). DSCT combines the merits of diffraction and/or small angle scattering with computed tomography to allow imaging the interior of materials based on the diffraction or small angle scattering signals. This allows, e.g., one to distinguish the distributions of polymorphs in complex mixtures. Here we review this technique and give examples of how it can shed light on modern nanoscale materials. PMID:26505175

  16. A tale of two species: neural integration in zebrafish and monkeys

    PubMed Central

    Joshua, Mati; Lisberger, Stephen G.

    2014-01-01

    Selection of a model organism creates a tension between competing constraints. The recent explosion of modern molecular techniques has revolutionized the analysis of neural systems in organisms that are amenable to genetic techniques. Yet, the non-human primate remains the gold-standard for the analysis of the neural basis of behavior, and as a bridge to the operation of the human brain. The challenge is to generalize across species in a way that exposes the operation of circuits as well as the relationship of circuits to behavior. Eye movements provide an opportunity to cross the bridge from mechanism to behavior through research on diverse species. Here, we review experiments and computational studies on a circuit function called “neural integration” that occurs in the brainstems of larval zebrafish, non-human primates, and species “in between”. We show that analysis of circuit structure using modern molecular and imaging approaches in zebrafish has remarkable explanatory power for the details of the responses of integrator neurons in the monkey. The combination of research from the two species has led to a much stronger hypothesis for the implementation of the neural integrator than could have been achieved using either species alone. PMID:24797331

  17. A tale of two species: Neural integration in zebrafish and monkeys.

    PubMed

    Joshua, M; Lisberger, S G

    2015-06-18

    Selection of a model organism creates tension between competing constraints. The recent explosion of modern molecular techniques has revolutionized the analysis of neural systems in organisms that are amenable to genetic techniques. Yet, the non-human primate remains the gold-standard for the analysis of the neural basis of behavior, and as a bridge to the operation of the human brain. The challenge is to generalize across species in a way that exposes the operation of circuits as well as the relationship of circuits to behavior. Eye movements provide an opportunity to cross the bridge from mechanism to behavior through research on diverse species. Here, we review experiments and computational studies on a circuit function called "neural integration" that occurs in the brainstems of larval zebrafish, primates, and species "in between". We show that analysis of circuit structure using modern molecular and imaging approaches in zebrafish has remarkable explanatory power for details of the responses of integrator neurons in the monkey. The combination of research from the two species has led to a much stronger hypothesis for the implementation of the neural integrator than could have been achieved using either species alone. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  19. ESP IMPROVEMENTS AT POWER PLANTS

    EPA Science Inventory

    An on-going ORD and OIA collaborative project in the Newly Independent States (NIS) is designed to upgrade ESPs used in NIS power plants and has laid the foundation for implementing cost-effective ESP modernization efforts at power plants. Thus far, state-of-the-art ESP performan...

  20. Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions

    PubMed Central

    Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott

    2014-01-01

    With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings, including electroencephalography (EEG), has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which the potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as a color-coded horizontal line; these lines are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexity of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
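    The ERP-image construction described above (stack single-trial epochs as rows, optionally sort by a per-trial value such as reaction time, then smooth with a moving window across trials) can be sketched as follows. The boxcar window and edge padding are simplifying choices for this sketch, not EEGLAB's exact options.

```python
# Minimal ERP-image construction: trial sorting plus moving-window
# smoothing down the trial axis. Window shape is an assumption.
import numpy as np

def erp_image(epochs, sort_values=None, window=3):
    """epochs: array of shape (n_trials, n_times). Returns a 2-D image of the
    same shape: rows sorted by sort_values (if given), then boxcar-averaged
    over `window` adjacent trials at each time point."""
    epochs = np.asarray(epochs, dtype=float)
    if sort_values is not None:
        epochs = epochs[np.argsort(sort_values)]
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(epochs, ((pad, pad), (0, 0)), mode="edge")
    # smooth each time point's column of trial values
    return np.stack([np.convolve(padded[:, t], kernel, mode="valid")
                     for t in range(epochs.shape[1])], axis=1)
```

    Plotting the returned array as a color map (trials on the vertical axis, time on the horizontal) gives the ERP-image display.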

  1. Computational Skills for Biology Students

    ERIC Educational Resources Information Center

    Gross, Louis J.

    2008-01-01

    This interview with Distinguished Science Award recipient Louis J. Gross highlights essential computational skills for modern biology, including: (1) teaching concepts listed in the Math & Bio 2010 report; (2) illustrating to students that jobs today require quantitative skills; and (3) resources and materials that focus on computational skills.

  2. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT); the HPCMP Applications Software Initiative (HASI); and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability.
The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  3. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrate cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
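    The binary high/low permeability formulation maps naturally onto the quadratic unconstrained binary optimization (QUBO) form that a quantum annealer minimizes. The sketch below is a hedged illustration only: it uses a toy linear forward model (the sensitivity matrix `A` and the permeability levels are invented, not taken from the abstract's numerical model) and minimizes the resulting QUBO by classical brute force in place of the annealer.

```python
import itertools
import numpy as np

# Hypothetical toy forward model: observations respond linearly to
# cell permeabilities (a real hydrologic model is nonlinear).
rng = np.random.default_rng(0)
n = 8                                  # number of aquifer cells
A = rng.normal(size=(12, n))           # illustrative sensitivity matrix
k_low, k_high = 1.0, 10.0

# "True" binary permeability field and synthetic observations.
x_true = rng.integers(0, 2, size=n)
k_true = k_low + (k_high - k_low) * x_true
d = A @ k_true

# The misfit ||A k - d||^2 with k = k_low + dk * x becomes a QUBO in x,
# using x_i^2 = x_i for binary variables to fold linear terms into the diagonal.
dk = k_high - k_low
r = A @ (k_low * np.ones(n)) - d       # residual at the all-low baseline
B = dk * A
Q = B.T @ B                            # quadratic couplings
Q[np.diag_indices(n)] += 2 * B.T @ r   # linear terms on the diagonal

def qubo_energy(x):
    return x @ Q @ x

# Brute-force enumeration stands in for the annealer at this toy size.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
           key=qubo_energy)
print(best.tolist())                   # recovered binary permeability field
```

    On this noiseless toy problem the QUBO minimum coincides with the true binary field; real inverse problems add noise and regularization terms to the same quadratic form.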

  4. Multipurpose electroslag remelting furnace for modern energy and heavy engineering industry

    NASA Astrophysics Data System (ADS)

    Dub, A. V.; Dub, V. S.; Kriger, Yu. N.; Levkov, L. Ya.; Shurygin, D. A.; Kissel'man, M. A.; Nekhamin, C. M.; Chernyak, A. I.; Bessonov, A. V.; Kamantsev, S. V.; Sokolov, S. O.

    2012-12-01

    In 2011, a unique complex based on a multipurpose unit-type electroslag remelting (ESR) furnace was created to meet the demand for large high-quality solid and hollow billets for the products of the power, atomic, petrochemical, and heavy machine engineering industries. This complex has modern low-frequency power supplies with a new control level that ensure high homogeneity and quality of the billets and improve the engineering-and-economic performance of production. A unique pilot ESR furnace was erected to adjust technological conditions and the main elements of the control system.

  5. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  6. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  7. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
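    The patent's sequence (power only a boot region at initialization, then power additional memory regions as the operating system allocates them to an application) can be sketched as a toy simulation. The region size, class names, and power model below are illustrative assumptions, not details from the patent.

```python
# Toy simulation of staged memory power-up on a compute node.
class ComputeNode:
    REGION_MB = 256                      # assumed granularity of power control

    def __init__(self, total_mb=4096):
        self.regions = total_mb // self.REGION_MB
        self.powered = set()             # indices of powered-up regions

    def power_up(self, region):
        self.powered.add(region)

    def boot(self):
        # Power only the region holding the operating system image.
        self.power_up(0)

    def load_application(self, app_mb):
        # Allocate just enough additional regions for the application,
        # powering each one up before the application image is loaded.
        needed = -(-app_mb // self.REGION_MB)          # ceiling division
        free = (r for r in range(self.regions) if r not in self.powered)
        allocated = [next(free) for _ in range(needed)]
        for r in allocated:
            self.power_up(r)
        return allocated

node = ComputeNode()
node.boot()
node.load_application(app_mb=600)        # needs 3 regions of 256 MB
print(sorted(node.powered))              # only 4 of 16 regions are powered
```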

  8. Improvements in the efficiency of turboexpanders in cryogenic applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agahi, R.R.; Lin, M.C.; Ershaghi, B.

    1996-12-31

    Process designers have utilized turboexpanders in cryogenic processes because of their higher thermal efficiencies compared with conventional refrigeration cycles. Process design and equipment performance have improved substantially through the utilization of modern technologies. Turboexpander manufacturers have also adopted computational fluid dynamics (CFD) software, computer numerical control (CNC) technology, and holography techniques to further improve already impressive turboexpander efficiency. In this paper, the authors explain the design process of the turboexpander utilizing modern technology. Two cases of turboexpanders processing helium (4.35 K) and hydrogen (56 K) will be presented.

  9. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  10. A guide to missing data for the pediatric nephrologist.

    PubMed

    Larkins, Nicholas G; Craig, Jonathan C; Teixeira-Pinto, Armando

    2018-03-13

    Missing data is an important and common source of bias in clinical research. Readers should be alert to and consider the impact of missing data when reading studies. Beyond preventing missing data in the first place, through good study design and conduct, there are different strategies available to handle data containing missing observations. Complete case analysis is often biased unless data are missing completely at random. Better methods of handling missing data include multiple imputation and models using likelihood-based estimation. With advancing computing power and modern statistical software, these methods are within the reach of clinician-researchers under guidance of a biostatistician. As clinicians reading papers, we need to continue to update our understanding of statistical methods, so that we understand the limitations of these techniques and can critically interpret literature.
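    The contrast the abstract draws between complete-case analysis and multiple imputation can be shown concretely. The sketch below is a minimal, self-contained illustration with simulated data (all numbers are invented): values are missing at random conditional on a covariate, which biases the complete-case mean, while a simple regression-based multiple imputation with pooled estimates recovers the truth. Real analyses would use dedicated software (e.g., the `mice` family of tools) rather than this hand-rolled version.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y depends on x; y is more likely missing when x is
# high (missing at random given x), which biases a complete-case mean.
n = 2000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)   # true mean of y is 2.0
missing = rng.random(n) < 1 / (1 + np.exp(-2 * x))   # high x -> likely missing

# Complete-case estimate of mean(y): biased downward here.
cc_mean = y[~missing].mean()

# Multiple imputation: regress y on x using observed cases, create m
# completed datasets by imputing with added residual noise, pool the means.
xo, yo = x[~missing], y[~missing]
X = np.column_stack([np.ones(xo.size), xo])
beta, *_ = np.linalg.lstsq(X, yo, rcond=None)
sigma = np.sqrt(np.mean((yo - X @ beta) ** 2))

m = 20
pooled = []
for _ in range(m):
    y_imp = y.copy()
    xm = x[missing]
    y_imp[missing] = beta[0] + beta[1] * xm + rng.normal(scale=sigma, size=xm.size)
    pooled.append(y_imp.mean())
mi_mean = float(np.mean(pooled))

print(round(cc_mean, 2), round(mi_mean, 2))
```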

  11. The place of human psychophysics in modern neuroscience.

    PubMed

    Read, J C A

    2015-06-18

    Human psychophysics is the quantitative measurement of our own perceptions. In essence, it is simply a more sophisticated version of what humans have done since time immemorial: noticed and reflected upon what we can see, hear, and feel. In the 21st century, when hugely powerful techniques are available that enable us to probe the innermost structure and function of nervous systems, is human psychophysics still relevant? I argue that it is, and that in combination with other techniques, it will continue to be a key part of neuroscience for the foreseeable future. I discuss these points in detail using the example of binocular stereopsis, where human psychophysics, in combination with physiology and computational vision, has made a substantial contribution.

  12. Applied mediation analyses: a review and tutorial.

    PubMed

    Lange, Theis; Hansen, Kim Wadt; Sørensen, Rikke; Galatius, Søren

    2017-01-01

    In recent years, mediation analysis has emerged as a powerful tool to disentangle causal pathways from an exposure/treatment to clinically relevant outcomes. Mediation analysis has been applied in scientific fields as diverse as labour market relations and randomized clinical trials of heart disease treatments. In parallel to these applications, the underlying mathematical theory and computer tools have been refined. This combined review and tutorial will introduce the reader to modern mediation analysis including: the mathematical framework; required assumptions; and software implementation in the R package medflex. All results are illustrated using a recent study on the causal pathways stemming from the early invasive treatment of acute coronary syndrome, for which the rich Danish population registers allow us to follow patients' medication use and more after being discharged from hospital.
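    The core of the linear mediation analysis described here (decomposing a treatment effect into an indirect path through a mediator and a direct path) can be sketched numerically. The abstract's software is the R package medflex; the block below is instead a minimal product-of-coefficients illustration in Python on simulated data, with all coefficients invented, assuming linear models and no treatment-mediator interaction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated trial: treatment t affects mediator m, which affects outcome y.
# True indirect effect is a*b = 0.5 * 0.8 = 0.4; true direct effect is 0.3.
n = 5000
t = rng.integers(0, 2, size=n).astype(float)
m = 0.5 * t + rng.normal(size=n)
y = 0.3 * t + 0.8 * m + rng.normal(size=n)

def ols(X, y):
    # Ordinary least squares coefficients.
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, t]), m)[1]              # t -> mediator
b, direct = ols(np.column_stack([ones, m, t]), y)[1:]  # m -> y, and t -> y given m
indirect = a * b                                       # product-of-coefficients

print(round(float(indirect), 2), round(float(direct), 2))
```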

  13. Publications | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Publication titles visible in this page listing (truncated in the source): "100% Renewable Grid: Operating Electric Power Systems with Extremely High Levels of Variable Renewable …"; "Feeder Voltage Regulation with High-Penetration PV Using Advanced Inverters and a Distribution …"; "Integrating High Levels of Variable Renewable Energy into Electric Power Systems," Journal of Modern Power …

  14. Foucault-Teaching-Theology

    ERIC Educational Resources Information Center

    Beaudoin, Tom

    2003-01-01

    Religious education takes place within a postmodern cultural-intellectual milieu exemplified in part by the work of Michel Foucault, which disrupts common modern concepts of knowledge and power. For Foucault, power is not only repressive but also productive, insofar as every system of knowledge depends on social arrangements of power for the…

  15. Benefits of Advanced Control Room Technologies: Phase One Upgrades to the HSSL, Research Plan, and Performance Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Blanc, Katya; Joe, Jeffrey; Rice, Brandon

    Control Room modernization is an important part of life extension for the existing light water reactor fleet. None of the 99 currently operating commercial nuclear power plants in the U.S. has completed a full-scale control room modernization to date. A full-scale modernization might, for example, entail replacement of all analog panels with digital workstations. Such modernizations have been undertaken successfully in upgrades in Europe and Asia, but the U.S. has yet to undertake a control room upgrade of this magnitude. Instead, nuclear power plant main control rooms for the existing commercial reactor fleet remain significantly analog, with only limited digital modernizations. Previous research under the U.S. Department of Energy’s Light Water Reactor Sustainability Program has helped establish a systematic process for control room upgrades that support the transition to a hybrid control room. While the guidance developed to date helps streamline the process of modernization and reduce costs and uncertainty associated with introducing digital control technologies into an existing control room, these upgrades do not achieve the full potential of newer technologies that might otherwise enhance plant and operator performance. The aim of the control room benefits research is to identify previously overlooked benefits of modernization, identify candidate technologies that may facilitate such benefits, and demonstrate these technologies through human factors research. This report describes the initial upgrades to the HSSL and outlines the methodology for a pilot test of the HSSL configuration.

  16. Community Coordinated Modeling Center: A Powerful Resource in Space Science and Space Weather Education

    NASA Astrophysics Data System (ADS)

    Chulaki, A.; Kuznetsova, M. M.; Rastaetter, L.; MacNeice, P. J.; Shim, J. S.; Pulkkinen, A. A.; Taktakishvili, A.; Mays, M. L.; Mendoza, A. M. M.; Zheng, Y.; Mullinix, R.; Collado-Vega, Y. M.; Maddox, M. M.; Pembroke, A. D.; Wiegand, C.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) is a NASA-affiliated interagency partnership with the primary goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource supporting undergraduate and graduate education and research, and spreading space weather awareness worldwide. A unique combination of assets, capabilities and close ties to the scientific and educational communities enable this small group to serve as a hub for raising generations of young space scientists and engineers. CCMC resources are publicly available online, providing unprecedented global access to the largest collection of modern space science models (developed by the international research community). CCMC has revolutionized the way simulations are utilized in classroom settings, student projects, and scientific labs and serves hundreds of educators, students and researchers every year. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unrivaled capabilities and experiences, the team provides in-depth space weather training to students and professionals worldwide, and offers an amazing opportunity for undergraduates to engage in real-time space weather monitoring, analysis, forecasting and research. In-house development of state-of-the-art space weather tools and applications provides exciting opportunities for students majoring in computer science and computer engineering to intern with the software engineers at the CCMC while also learning about space weather from NASA scientists.

  17. Advanced secondary power system for transport aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, A. C.; Hansen, I. G.; Beach, R. F.; Plencner, R. M.; Dengler, R. P.; Jefferies, K. S.; Frye, R. J.

    1985-01-01

    A concept for an advanced aircraft power system was identified that uses 20-kHz, 440-V, sine-wave power distribution. This system was integrated with an electrically powered flight control system and with other aircraft systems requiring secondary power. The resulting all-electric secondary power configuration reduced the empty weight of a modern 200-passenger, twin-engine transport by 10 percent and the mission fuel by 9 percent.

  18. Evolution of computational chemistry, the "launch pad" to scientific computational models: The early days from a personal account, the present status from the TACC-2012 congress, and eventual future applications from the global simulation approach

    NASA Astrophysics Data System (ADS)

    Clementi, Enrico

    2012-06-01

    This is the introductory chapter to the AIP Proceedings volume "Theory and Applications of Computational Chemistry: The First Decade of the Second Millennium" where we discuss the evolution of "computational chemistry". Very early variational computational chemistry developments are reported in Sections 1 to 7, 11, and 12 by recalling some of the computational chemistry contributions by the author and his collaborators (from the late 1950s to the mid 1990s); perturbation techniques are not considered in this already extended work. Present-day computational chemistry is partly considered in Sections 8 to 10, where more recent studies by the author and his collaborators are discussed, including the Hartree-Fock-Heitler-London method; a more general discussion on present-day computational chemistry is presented in Section 14. The following chapters of this AIP volume provide a view of modern computational chemistry. Future computational chemistry developments can be extrapolated from the chapters of this AIP volume; further, Sections 13 and 15 present an overall analysis of computational chemistry, obtained from the Global Simulation approach, by considering the evolution of scientific knowledge confronted with the opportunities offered by modern computers.

  19. Institutional Authority and Traces of Intergenerational Conflict

    ERIC Educational Resources Information Center

    Tufan, Ismail; Kilic, Sultan; Tokgoz, Nimet; Howe, Jurgen; Yaman, Hakan

    2010-01-01

    While society's level of education increases in a modernization process, the knowledge monopoly is taken over by the young. Increasing demand on knowledge attained through organized education leads to increasing power by the young. In the modernizing society of Turkey, this kind of struggle will occur between intellectual groups. Results of this…

  20. Religion, Education, and Secularism in International Agencies

    ERIC Educational Resources Information Center

    Stambach, Amy; Marshall, Katherine; Nelson, Matthew J.; Andreescu, Liviu; Kwayu, Aikande C.; Wexler, Philip; Hotam, Yotam; Fischer, Shlomo; El Bilawi, Hassan

    2011-01-01

    During the interwar years of the early twentieth century, and through at least the 1980s, education was seen by scholars, state leaders, and international agency representatives alike as a way to modernize and secularize underdeveloped communities. Arguments about the modernizing power of education did not erase or discount the presence of…

  1. Top Four Trends in Student Information Systems

    ERIC Educational Resources Information Center

    Weathers, Robert

    2013-01-01

    The modern student information systems (SIS) is a powerful administrative tool with robust functionality. As such, it is essential that school and district administrators consider the top trends in modern student information systems before going forward with system upgrades or new purchases. These trends, described herein, are: (1) Support for…

  2. China in the Year 2000: Modernization Global Power and the Strategic Balance,

    DTIC Science & Technology

    1981-03-16

    productivity, the prospects for China’s modernization are dramatically enhanced. A comparatively high level of social cohesion is another factor in...resources, political skill, social cohesion and national will to eventually become a superpower, but it will take more time than the brief span of two

  3. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
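    This second patent describes a directive-driven scheme: the application carries power-consumption directives tied to sections of its code, and each node reduces component power while executing those sections. A toy interpretation is sketched below; the section names, directive strings, and wattage values are all illustrative assumptions.

```python
# Toy directive-driven power reduction on a compute node.
FULL, REDUCED = 100, 40   # hypothetical CPU power levels (watts)

class Node:
    def __init__(self):
        self.cpu_power = FULL
        self.log = []

    def run(self, application):
        for section, directive in application:
            # Apply the directive before executing the section it covers.
            self.cpu_power = REDUCED if directive == "low-power" else FULL
            self.log.append((section, self.cpu_power))

app = [("setup", "low-power"),        # e.g., I/O-bound: CPU can be throttled
       ("solve", "full-power"),       # compute-bound inner loop
       ("teardown", "low-power")]

node = Node()
node.run(app)
print(node.log)
```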

  4. A Prospectus for the Future Development of a Speech Lab: Hypertext Applications.

    ERIC Educational Resources Information Center

    Berube, David M.

    This paper presents a plan for the next generation of speech laboratories which integrates technologies of modern communication in order to improve and modernize the instructional process. The paper first examines the application of intermediate technologies including audio-video recording and playback, computer assisted instruction and testing…

  5. MODERN LINGUISTICS, ITS DEVELOPMENT AND SCOPE.

    ERIC Educational Resources Information Center

    LEVIN, SAMUEL R.

    THE DEVELOPMENT OF MODERN LINGUISTICS STARTED WITH JONES' DISCOVERY IN 1786 THAT SANSKRIT IS CLOSELY RELATED TO THE CLASSICAL, GERMANIC, AND CELTIC LANGUAGES, AND HAS ADVANCED TO INCLUDE THE APPLICATION OF COMPUTERS IN LANGUAGE ANALYSIS. THE HIGHLIGHTS OF LINGUISTIC RESEARCH HAVE BEEN DE SAUSSURE'S DISTINCTION BETWEEN THE DIACHRONIC AND THE…

  6. Numeric Design and Performance Analysis of Solid Oxide Fuel Cell -- Gas Turbine Hybrids on Aircraft

    NASA Astrophysics Data System (ADS)

    Hovakimyan, Gevorg

    The aircraft industry benefits greatly from small improvements in aircraft component design. One possible area of improvement is in the Auxiliary Power Unit (APU). Modern aircraft APUs are gas turbines located in the tail section of the aircraft that generate additional power when needed. Unfortunately the efficiency of modern aircraft APUs is low. Solid Oxide Fuel Cell/Gas Turbine (SOFC/GT) hybrids are one possible alternative for replacing modern gas turbine APUs. This thesis investigates the feasibility of replacing conventional gas turbine APUs with SOFC/GT APUs on aircraft. An SOFC/GT design algorithm was created in order to determine the specifications of an SOFC/GT APU. The design algorithm is comprised of several integrated modules which together model the characteristics of each component of the SOFC/GT system. Given certain overall inputs, through numerical analysis, the algorithm produces an SOFC/GT APU, optimized for specific power and efficiency, capable of performing to the required specifications. The SOFC/GT design is then input into a previously developed quasi-dynamic SOFC/GT model to determine its load following capabilities over an aircraft flight cycle. Finally an aircraft range study is conducted to determine the feasibility of the SOFC/GT APU as a replacement for the conventional gas turbine APU. The design results show that SOFC/GT APUs have lower specific power than GT systems, but have much higher efficiencies. Moreover, the dynamic simulation results show that SOFC/GT APUs are capable of following modern flight loads. Finally, the range study determined that SOFC/GT APUs are more attractive over conventional APUs for longer range aircraft.

  7. Integrated Devices and Systems | Grid Modernization | NREL

    Science.gov Websites

    Topics listed on this page: storage models; microgrids; grid simulation and power hardware-in-the-loop; grid standards and codes. Contact: Barry Mather, Ph.D.

  8. The modern Chinese family in light of economic and legal history.

    PubMed

    Huang, Philip C C

    2011-01-01

    Most social science theory and the currently powerful Chinese ideology of modernizationism assume that, with modern development, family-based peasant farm production will disappear, to be replaced by individuated industrial workers and the three-generation family by the nuclear family. The actual record of China’s economic history, however, shows the powerful persistence of the small family farm, as well as of the three-generation family down to this day, even as China’s GDP becomes the second largest in the world. China’s legal system, similarly, encompasses a vast informal sphere, in which familial principles operate more than individualist ones. And, in between the informal-familial and the formal-individualist, there is an enormous intermediate sphere in which the two tendencies are engaged in a continual tug of war. The economic behavior of the Chinese family unit reveals great contrasts with what is assumed by conventional economics. It has a different attitude toward labor from that of both the individual worker and the capitalist firm. It also has a different structural composition, and a different attitude toward investment, children’s education, and marriage. Proper attention to how Chinese modernity differs socially, economically, and legally from the modern West points to the need for a different kind of social science; it also lends social–economic substance to claims for a modern Chinese culture different from the modern West’s.

  9. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  10. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  11. Coordinating Technological Resources in a Non-Technical Profession: The Administrative Computer User Group.

    ERIC Educational Resources Information Center

    Rollo, J. Michael; Marmarchev, Helen L.

    1999-01-01

    The explosion of computer applications in the modern workplace has required student affairs professionals to keep pace with technological advances for office productivity. This article recommends establishing an administrative computer user groups, utilizing coordinated web site development, and enhancing working relationships as ways of dealing…

  12. Computer-Based Molecular Modelling: Finnish School Teachers' Experiences and Views

    ERIC Educational Resources Information Center

    Aksela, Maija; Lundell, Jan

    2008-01-01

    Modern computer-based molecular modelling opens up new possibilities for chemistry teaching at different levels. This article presents a case study seeking insight into Finnish school teachers' use of computer-based molecular modelling in teaching chemistry, into the different working and teaching methods used, and their opinions about necessary…

  13. Introduction to the theory of machines and languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidhaas, P. P.

    1976-04-01

    This text is intended to be an elementary "guided tour" through some basic concepts of modern computer science. Various models of computing machines and formal languages are studied in detail. Discussions center around questions such as, "What is the scope of problems that can or cannot be solved by computers?"

  14. Computer-Aided Instruction in Automated Instrumentation.

    ERIC Educational Resources Information Center

    Stephenson, David T.

    1986-01-01

    Discusses functions of automated instrumentation systems, i.e., systems which combine electrical measuring instruments and a controlling computer to measure responses of a unit under test. The computer-assisted tutorial then described is programmed for use on such a system--a modern microwave spectrum analyzer--to introduce engineering students to…

  15. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

    In this dissertation, several studies of electric power system reliability and economy assessment methods are presented. More precisely, several algorithms for evaluating power system reliability and economy are studied, two novel algorithms are applied to this field, and their simulation results are compared with conventional results. As electrical power systems develop toward extra-high voltage, long-distance transmission, large capacity, and regional networking, new technical equipment and electricity-market mechanisms have gradually been established, and the consequences of power outages have become increasingly serious. Because of its complexity and security requirements, the electrical power system demands the highest possible reliability. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined while remaining as readable as conventional methods. The method has been applied to the IEEE Reliability Test System, and the simulation results obtained are close to the IEEE reference data, which suggests it could be used for future studies of system reliability. Besides reliability, modern power systems are expected to be more economical. This dissertation presents a novel evolutionary algorithm, the quantum evolutionary membrane algorithm (QEPS), which combines the concepts of quantum-inspired evolutionary algorithms and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests, and a conventional evolutionary algorithm is applied to the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, i.e., the minimum cost of the electricity supplied by the wind-farm system.
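    The economic dispatch problem described above can be illustrated with a much simpler evolutionary algorithm than QEPS. The sketch below is a generic (mu+lambda)-style search over generator outputs with a quadratic fuel-cost model and a penalty for violating the demand balance; all coefficients, limits, and the demand value are made-up illustration values, not the dissertation's data:

```python
import random

def dispatch_cost(outputs, cost_coeffs):
    """Quadratic fuel-cost model: sum of a + b*P + c*P^2 per generator."""
    return sum(a + b * p + c * p * p
               for p, (a, b, c) in zip(outputs, cost_coeffs))

def evolve_dispatch(cost_coeffs, limits, demand, pop_size=40,
                    generations=300, penalty=1e3, seed=1):
    """(mu + lambda)-style evolutionary search for a low-cost dispatch
    whose total output meets demand within the generator limits."""
    rng = random.Random(seed)

    def fitness(ind):
        # Penalize any mismatch between total generation and demand.
        return (dispatch_cost(ind, cost_coeffs)
                + penalty * abs(sum(ind) - demand))

    def random_ind():
        return [rng.uniform(lo, hi) for lo, hi in limits]

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        # Mutate every parent (clamped to limits), then keep the best half.
        children = [[min(max(p + rng.gauss(0, 1.0), lo), hi)
                     for p, (lo, hi) in zip(parent, limits)]
                    for parent in pop]
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]
```

    A real economic-dispatch solver would add ramp limits, losses, and (as in the dissertation) stochastic wind output; this sketch only shows the cost-plus-penalty structure that such evolutionary approaches share.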

  16. Computer hardware for radiologists: Part 2

    PubMed Central

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive depends on many factors, including the number of disk sides, the number of tracks per side, the number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard that permits efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI Express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895
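    The capacity arithmetic described for hard drives can be written out directly. The geometry below is hypothetical, chosen only to illustrate the sides x tracks x sectors x bytes-per-sector product:

```python
def hard_drive_capacity(sides, tracks_per_side, sectors_per_track,
                        bytes_per_sector=512):
    """Raw capacity in bytes from the classic CHS-style geometry:
    sides x tracks x sectors x bytes per sector."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Hypothetical geometry: 2 sides, 16,383 tracks/side, 63 sectors of 512 B.
capacity = hard_drive_capacity(2, 16383, 63)
print(capacity, "bytes =", round(capacity / 1e9, 2), "GB (decimal)")
```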

  17. Computer hardware for radiologists: Part 2.

    PubMed

    Indrajit, Ik; Alam, A

    2010-11-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive depends on many factors, including the number of disk sides, the number of tracks per side, the number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard that permits efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI Express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  18. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of as-yet-undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern computational tools. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems that arise from advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics on the evolution of protoplanetary disks.

  19. Adaptations in Electronic Structure Calculations in Heterogeneous Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamudupula, Sai

    Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand the full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware, based on the example of the computational chemistry package GAMESS and the middleware NICAN. The existing synchronous model is limited by the possible delays due to the middleware processing time under sustainable runtime system conditions. The proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of adapting its fragment scheduling policy statically and dynamically based on the computing platform conditions. Significant execution time and throughput gains have been obtained from such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on the main memory availability at run time: NICAN prompts FMO to postpone scheduling certain fragments if there is not enough memory for their immediate execution. Hence, FMO may be able to complete calculations that, without such adaptations, it would abort.
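    The dynamic adaptation described above, postponing fragments when main memory is insufficient, can be sketched as a simple memory-aware scheduling pass. This is an illustrative reconstruction, not the actual NICAN/FMO interface; the fragment names and memory units below are invented:

```python
def schedule_fragments(fragments, free_memory):
    """Memory-aware scheduling in the spirit of the NICAN/FMO adaptation:
    fragments whose memory need exceeds what is currently free are
    postponed rather than launched (launching them would abort the run).
    `fragments` is a list of (name, mem_needed) pairs."""
    running, postponed, mem = [], [], free_memory
    for name, need in fragments:
        if need <= mem:
            running.append(name)   # enough memory: launch immediately
            mem -= need
        else:
            postponed.append(name) # retry once memory is released
    return running, postponed
```

    A usage example: with 10 units free, fragments needing 4, 8, and 2 units would launch the first and third and postpone the second until memory is released.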

  20. Simulation training tools for nonlethal weapons using gaming environments

    NASA Astrophysics Data System (ADS)

    Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry

    2006-05-01

    Modern simulation techniques have a growing role in evaluating new technologies and in developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstrating concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real time. Computer games now allow multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherently intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modification, enabling game enthusiasts and amateur programmers to advance the state of the art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs by letting end users learn what a device does and doesn't do before use, understand responses to the device before deployment, determine whether the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices and developing training tools for such devices.

  1. Accelerating Astronomy & Astrophysics in the New Era of Parallel Computing: GPUs, Phi and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.; Dindar, Saleh; Peters, Jorg

    2015-08-01

    The realism of astrophysical simulations and statistical analyses of astronomical data is set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to the details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism rather than from increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities, and network characteristics. I will provide an astronomer's overview of the opportunities and challenges presented by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order-of-magnitude speed-ups and which problems are unlikely to parallelize efficiently enough to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. 
Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with the support of the Penn State Center for Astrostatistics and Institute for CyberScience.
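    The distinction drawn above, between problems that yield large speed-ups and those that do not, is captured by Amdahl's law: the serial fraction of a code bounds its parallel speed-up no matter how many cores are added. A minimal sketch (the parallel fractions and core count below are illustrative, not from the abstract):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when a fraction of the work
    is spread perfectly across n processors and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 1000 cores, a 90%-parallel code gains less than 10x:
for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: {amdahl_speedup(p, 1000):.1f}x on 1000 cores")
```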

  2. VLSI design of lossless frame recompression using multi-orientation prediction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo

    2016-01-01

    The pursuit of high-end visual quality drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also confronts video coding systems with two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by a careful hardware architecture design with advanced semiconductor processes. The second, however, becomes a critical design bottleneck for a modern video coding system. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work is realised as a silicon chip in a TSMC 0.18 µm CMOS process. Its encoding capability can reach full-HD (1920 × 1080)@48 fps. The chip power consumption is 17.31 mW@100 MHz. Core area and chip area are 0.83 × 0.83 mm² and 1.20 × 1.20 mm², respectively. Experimental results demonstrate that this work exhibits an outstanding lossless compression ratio with competitive hardware performance.
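    Lossless frame recompression of the kind discussed here can be illustrated with a far simpler scheme than the paper's multi-orientation predictor: predict each pixel from its left neighbour and store residuals. Smooth image rows yield small residuals that entropy-code much better than raw pixels, and the transform is exactly invertible. A sketch, with a made-up 2 x 3 frame:

```python
def predict_residuals(frame):
    """Left-neighbor prediction per row: keep the first pixel and the
    difference of each pixel from its left neighbor."""
    return [[row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
            for row in frame]

def reconstruct(residuals):
    """Invert the prediction by cumulative summation along each row."""
    frame = []
    for row in residuals:
        out = [row[0]]
        for d in row[1:]:
            out.append(out[-1] + d)
        frame.append(out)
    return frame

frame = [[10, 12, 15], [7, 7, 9]]
print(predict_residuals(frame))  # small residuals for smooth rows
```

    A multi-orientation predictor generalizes this by also trying upper and diagonal neighbours and keeping whichever direction gives the smallest residuals.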

  3. Large-Scale and Global Hydrology. Chapter 92

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat

    2016-01-01

    Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental/ocean-basin scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change, so it is crucial that we continue to invest in our ability to monitor it.

  4. Small Microprocessor for ASIC or FPGA Implementation

    NASA Technical Reports Server (NTRS)

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

    A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on a commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem this microprocessor was meant to solve was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of a system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with a clock frequency of 20 MHz.

  5. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on correct water chemistry. Maintaining it requires monitoring the quality of the heat carrier, which is the task of the chemical monitoring system. Thus, the reliability of the monitoring system plays an important part in ensuring the reliability of the main equipment. The monitoring system's reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Plant personnel who maintain the measuring equipment must be informed promptly about any breakdowns in the monitoring system so that they can remove the fault quickly. A computer consultant system for personnel maintaining the sensors and other chemical monitoring equipment can help them notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of a part of the chemical monitoring system.

  6. Application of Pinniped Vibrissae to Aeropropulsion

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali; Poinsatte, Philip; Thurman, Douglas; Wroblewski, Adam; Snyder, Christopher

    2015-01-01

    Vibrissae of Phoca vitulina (Harbor Seal) and Mirounga angustirostris (Elephant Seal) possess undulations along their length. Harbor Seal vibrissae were shown to reduce vortex-induced vibrations and reduce drag compared to appropriately scaled cylinders and ellipses. Samples of Harbor Seal, Elephant Seal, and California Sea Lion vibrissae were collected from the Marine Mammal Center in California. CT scanning, microscopy, and 3D scanning techniques were utilized to characterize the whiskers. Computational fluid dynamics simulations of the whiskers were carried out to compare them to an ellipse and a cylinder. Leading edge parameters from the whiskers were used to create a 3D profile based on a modern power turbine blade. The NASA SW-2 facility was used to perform wind tunnel cascade testing on the 'Seal Blades'. Computational fluid dynamics simulations were used to study the effect of incidence angles from -37 to +10 degrees on the aerodynamic performance of the Seal Blade. The tests and simulations were conducted at a Reynolds number of 100,000. The Seal Blades showed consistent performance improvements over the baseline configuration. It was determined that a fuel burn reduction of approximately 5% could be achieved for a fixed-wing aircraft. Noise reduction potential is also explored.

  7. Strategies of statistical windows in PET image reconstruction to improve the user’s real time experience

    NASA Astrophysics Data System (ADS)

    Moliner, L.; Correcher, C.; Gimenez-Alventosa, V.; Ilisie, V.; Alvarez, J.; Sanchez, S.; Rodríguez-Alvarez, M. J.

    2017-11-01

    Nowadays, the computational power of modern computers, together with state-of-the-art reconstruction algorithms, makes it possible to obtain Positron Emission Tomography (PET) images in practically real time. These facts open the door to new applications such as tracking radio-pharmaceuticals inside the body or the use of PET for image-guided procedures, such as biopsy interventions, among others. This work is a proof of concept that aims to improve the user experience with real-time PET images. Fixed, incremental, overlapping, sliding, and hybrid windows are the different statistical combinations of data blocks used to generate intermediate images in order to follow the path of the activity in the Field Of View (FOV). To evaluate these different combinations, a point source is placed in a dedicated breast PET device and moved along the FOV. These acquisitions are reconstructed according to the different statistical windows, resulting in a smoother transition of positions for the image reconstructions that use the sliding and hybrid windows.
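    The windowing strategies can be sketched as different ways of grouping acquired data blocks into the sets used for each intermediate reconstruction. The sketch below shows only the fixed and sliding variants, with integers standing in for data blocks; the incremental, overlapping, and hybrid variants are further combinations of the same idea:

```python
def fixed_windows(blocks, size):
    """Non-overlapping windows: each block contributes to one image."""
    return [blocks[i:i + size]
            for i in range(0, len(blocks) - size + 1, size)]

def sliding_windows(blocks, size, step=1):
    """Overlapping windows that advance by `step` blocks, giving a
    smoother sequence of intermediate images from the same data."""
    return [blocks[i:i + size]
            for i in range(0, len(blocks) - size + 1, step)]

blocks = list(range(8))           # stand-ins for acquired data blocks
print(fixed_windows(blocks, 4))   # 2 images from 8 blocks
print(sliding_windows(blocks, 4)) # 5 images from the same 8 blocks
```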

  8. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  9. 100 most powerful.

    PubMed

    Romano, Michael

    2002-08-26

    Compiling a list of the 100 Most Powerful People in Healthcare requires some contemplation. Exactly what is power in this industry? C. Thomas Smith, president and CEO of VHA, calls power simply "the ability to make a difference." The influential figures chosen by Modern Healthcare readers represent a broadly varied and diverse group of movers and shakers. But they share the ability to change things.

  10. Petascale Many Body Methods for Complex Correlated Systems

    NASA Astrophysics Data System (ADS)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  11. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
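    The claimed method can be sketched as a toy simulation: each node cuts power to some of its components as soon as it begins the blocking operation, and power is restored only once every node has begun it. The class and power-state names below are invented for illustration; a real implementation would act on actual hardware components of each compute node:

```python
class ComputeNode:
    """Toy model of a compute node participating in a blocking operation."""
    def __init__(self, name):
        self.name = name
        self.power = "full"
        self.blocking = False

def run_blocking_op(nodes, arrival_order):
    """Nodes enter the blocking operation asynchronously (in
    `arrival_order`); each reduces power on entry, and power is
    restored for all once every node has begun the operation."""
    log = []
    for name in arrival_order:
        node = next(n for n in nodes if n.name == name)
        node.blocking = True
        node.power = "reduced"            # cut power on entering the op
        log.append((name, "reduced"))
    if all(n.blocking for n in nodes):    # all nodes have begun
        for n in nodes:
            n.power = "full"              # restore power
            log.append((n.name, "restored"))
    return log
```

    Early arrivers thus spend their wait at reduced power, which is where the energy saving in the patent comes from.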

  12. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  13. CompatPM: enabling energy efficient multimedia workloads for distributed mobile platforms

    NASA Astrophysics Data System (ADS)

    Nathuji, Ripal; O'Hara, Keith J.; Schwan, Karsten; Balch, Tucker

    2007-01-01

    The computation and communication abilities of modern platforms are enabling increasingly capable cooperative distributed mobile systems. An example is distributed multimedia processing of sensor data in robots deployed for search and rescue, where a system manager can exploit the application's cooperative nature to optimize the distribution of roles and tasks in order to successfully accomplish the mission. Because of limited battery capacities, a critical task a manager must perform is online energy management. While support for power management has become common for the components that populate mobile platforms, what is lacking is integration and explicit coordination across the different management actions performed in a variety of system layers. This paper develops an integration approach for distributed multimedia applications, where a global manager specifies both a power operating point and a workload for a node to execute. Surprisingly, when jointly considering power and QoS, experimental evaluations show that using a simple deadline-driven approach to assigning frequencies can be non-optimal. These trends are further affected by certain characteristics of the underlying power management mechanisms, which, in our research, are identified as groupings that classify component power management as "compatible" (VFC) or "incompatible" (VFI) with voltage and frequency scaling. We build on these findings to develop CompatPM, a vertically integrated control strategy for power management in distributed mobile systems. Experimental evaluations of CompatPM indicate average energy improvements of 8% when platform resources are managed jointly rather than independently, demonstrating that previous attempts to maximize battery life by simply minimizing frequency are inappropriate from a platform-level perspective.
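    The observation that simply minimizing frequency can be non-optimal is easy to reproduce with a toy energy model: once a fixed static/platform power accrues for the whole runtime, running slower is not always cheaper, because the longer runtime costs more static energy than the lower frequency saves. All constants below are illustrative, not CompatPM's measurements:

```python
def energy(gcycles, f_ghz, k=1.0, p_static=2.0):
    """Toy CMOS model: dynamic power ~ k*f^3 (voltage tracks frequency)
    plus a fixed static/platform power, both paid over the whole
    runtime t = cycles / f. Units: Gcycles, GHz, watts, joules."""
    t = gcycles / f_ghz
    return (k * f_ghz ** 3 + p_static) * t

# With 2 Gcycles of work, a mid frequency beats both extremes:
for f in (0.6, 1.0, 2.0):
    print(f"{f:.1f} GHz: {energy(2, f):.2f} J")
```

    The design point the paper makes is the same in spirit: the right frequency depends on how much of the platform's power scales with frequency, i.e., on whether components are "compatible" with voltage/frequency scaling.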

  14. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  15. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  16. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
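    The first step of the chain, the spatial filter, is just a matrix-matrix product, which is why it maps so naturally onto a GPU's parallel hardware. A pure-Python reference version is sketched below (the paper's implementation uses NVIDIA CUDA instead; the weight matrix here is a made-up bipolar montage, not from the study):

```python
def spatial_filter(weights, samples):
    """Apply a spatial filter as a matrix-matrix product:
    `weights` is (n_out x n_ch), `samples` is (n_ch x n_t),
    and the result is (n_out x n_t). Every output element is
    independent, so all n_out * n_t products can run in parallel."""
    n_out, n_ch = len(weights), len(weights[0])
    n_t = len(samples[0])
    out = [[0.0] * n_t for _ in range(n_out)]
    for i in range(n_out):
        for t in range(n_t):
            out[i][t] = sum(weights[i][c] * samples[c][t]
                            for c in range(n_ch))
    return out

# A bipolar derivation (channel 0 minus channel 1) over 2 time samples:
print(spatial_filter([[1, -1]], [[1, 2], [3, 4]]))
```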

  17. Modern Transformations of the University in Britain

    ERIC Educational Resources Information Center

    Felderhof, Marius C.

    2013-01-01

    This article seeks to evaluate universities against an ideal and finds the modern university gravely wanting in terms of its governance, staff and student relationships, and in the way it delivers its teaching and research. It is corrupted by power relationships, the lack of real accountability of SMTs, the failure to recognise the disciplines and…

  18. A New Rootedness? Education in the Technological Age

    ERIC Educational Resources Information Center

    Glendinning, Simon

    2018-01-01

    This paper explores the challenges facing educators in a time when modern technology, and especially modern social technology, has an increasingly powerful hold on our lives. The educational challenge does not primarily concern questions concerning the use of technology in the classroom, or as part of the learning environment, but a changeover in…

  19. Community to Classroom: Reflections on Community-Centered Pedagogy in Contemporary Modern Dance Technique

    ERIC Educational Resources Information Center

    Fitzgerald, Mary

    2017-01-01

    This article reflects on the ways in which socially engaged arts practices can contribute to reconceptualizing the contemporary modern dance technique class as a powerful site of social change. Specifically, the author considers how incorporating socially engaged practices into pedagogical models has the potential to foster responsible citizenship…

  20. Deconstructing Gender in Revised Feminist Fairy Tales

    ERIC Educational Resources Information Center

    Mcandrew, Linda

    2013-01-01

    Power relationships are a central premise in children's literature, especially traditional fairy tales and modern feminist fairy tales. This is seen in many fairy tales where the main female character is in some distress, her Prince Charming rescues her, and they live happily ever after. Modern feminist fairy tales are understood to be a forum…

  1. A HUMAN FACTORS META MODEL FOR U.S. NUCLEAR POWER PLANT CONTROL ROOM MODERNIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey C.

    Over the last several years, the United States (U.S.) Department of Energy (DOE) has sponsored human factors research and development (R&D) and human factors engineering (HFE) activities through its Light Water Reactor Sustainability (LWRS) program to modernize the main control rooms (MCR) of commercial nuclear power plants (NPP). Idaho National Laboratory (INL), in partnership with numerous commercial nuclear utilities, has conducted some of this R&D to enable the life extension of NPPs (i.e., provide the technical basis for the long-term reliability, productivity, safety, and security of U.S. NPPs). From these activities performed to date, a human factors meta model for U.S. NPP control room modernization can now be formulated. This paper discusses this emergent HFE meta model for NPP control room modernization, with the goal of providing an integrated high level roadmap and guidance on how to perform human factors R&D and HFE for those in the U.S. nuclear industry that are engaging in the process of upgrading their MCRs.

  2. Modernity, postmodernity and disability in developing countries.

    PubMed

    Lysack, C

    1997-06-01

    This paper examines the implications of two theoretical perspectives, modernity and postmodernity, for provision of community-based disability services in developing countries. The author argues that modernity's embrace of the 'wonders' of science and technology has significantly affected our understanding of what community is. Modernity, in fact, leads us to view communities in one of two major ways: as inferior, or as ideal. Both views are deeply flawed. Postmodernity's profound scepticism of truth claims and authority provides a useful critique of community conceived in modern terms. The critique is helpful to the extent that it reveals the power of language in constructing our ideas of community. It also highlights a new way of thinking about participation, individualism and choice in community disability initiatives.

  3. Advancing Creative Visual Thinking with Constructive Function-Based Modelling

    ERIC Educational Resources Information Center

    Pasko, Alexander; Adzhiev, Valery; Malikova, Evgeniya; Pilyugin, Victor

    2013-01-01

    Modern education technologies are destined to reflect the realities of a modern digital age. The juxtaposition of real and synthetic (computer-generated) worlds as well as a greater emphasis on visual dimension are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an…

  4. Brønsted acidity of protic ionic liquids: a modern ab initio valence bond theory perspective.

    PubMed

    Patil, Amol Baliram; Mahadeo Bhanage, Bhalchandra

    2016-09-21

    Room temperature ionic liquids (ILs), especially protic ionic liquids (PILs), are used in many areas of the chemical sciences. Ionicity, the extent of proton transfer, is a key parameter which determines many physicochemical properties and in turn the suitability of PILs for various applications. The spectrum of computational chemistry techniques applied to investigate ionic liquids includes classical molecular dynamics, Monte Carlo simulations, ab initio molecular dynamics, Density Functional Theory (DFT), CCSD(T), etc. At the other end of the spectrum is another computational approach: modern ab initio Valence Bond Theory (VBT). VBT differs from molecular orbital theory based methods in the expression of the molecular wave function. The molecular wave function in the valence bond ansatz is expressed as a linear combination of valence bond structures. These structures include covalent and ionic structures explicitly. Modern ab initio valence bond theory calculations of representative primary and tertiary ammonium protic ionic liquids indicate that modern ab initio valence bond theory can be employed to assess the acidity and ionicity of protic ionic liquids a priori.

  5. Transforming Power Systems Through Global Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-06-01

    Ambitious and integrated policy and regulatory frameworks are crucial to achieve power system transformation. The 21st Century Power Partnership -- a multilateral initiative of the Clean Energy Ministerial -- serves as a platform for public-private collaboration to advance integrated solutions for the large-scale deployment of renewable energy in combination with energy efficiency and grid modernization.

  6. Modern Managers Move Away from the Carrot and Stick Approach.

    ERIC Educational Resources Information Center

    Stahelski, Anthony J.; Frost, Dean E.

    Studies using social power theory constructs (French and Raven, 1959) to analyze compliance attempts in field settings show that the power bases are not consistently related to any subordinate outcome variables such as job performance or attitudes. A study was undertaken to test key hypotheses derived from the social power theory concerning…

  7. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov Websites

    As system topology, operating practices, and electric power markets evolve, grid integration studies require system data sets (for solar, wind, and load, among others) that accurately represent system conditions, including power injection into the power system at each location.

  8. Factors Affecting Teachers' Adoption of Educational Computer Games: A Case Study

    ERIC Educational Resources Information Center

    Kebritchi, Mansureh

    2010-01-01

    Even though computer games hold considerable potential for engaging and facilitating learning among today's children, the adoption of modern educational computer games is still meeting significant resistance in K-12 education. The purpose of this paper is to inform educators and instructional designers on factors affecting teachers' adoption of…

  9. Let Me Share a Secret with You! Teaching with Computers.

    ERIC Educational Resources Information Center

    de Vasconcelos, Maria

    The author describes her experiences teaching a computer-enhanced Modern Poetry course. The author argues that using computers enhances the concept of the classroom as learning community. It was the author's experience that students' postings on the discussion board created an atmosphere that encouraged student involvement, as opposed to the…

  10. The State of the Art in Information Handling. Operation PEP/Executive Information Systems.

    ERIC Educational Resources Information Center

    Summers, J. K.; Sullivan, J. E.

    This document explains recent developments in computer science and information systems of interest to the educational manager. A brief history of computers is included, together with an examination of modern computers' capabilities. Various features of card, tape, and disk information storage systems are presented. The importance of time-sharing…

  11. Computer-Mediated Communication as an Autonomy-Enhancement Tool for Advanced Learners of English

    ERIC Educational Resources Information Center

    Wach, Aleksandra

    2012-01-01

    This article examines the relevance of modern technology for the development of learner autonomy in the process of learning English as a foreign language. Computer-assisted language learning and computer-mediated communication (CMC) appear to be particularly conducive to fostering autonomous learning, as they naturally incorporate many elements of…

  12. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.
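
    The course described above uses Maple; the same style of computer-algebra assistance can be illustrated with the open-source SymPy library. This is a sketch of one routine quantum-mechanics exercise (particle in a box), not material taken from the course itself:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

# particle-in-a-box eigenfunction psi_n(x) = sqrt(2/L) sin(n pi x / L)
psi = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

# normalization: the probability density integrates to 1 over the box
norm = sp.simplify(sp.integrate(psi**2, (x, 0, L)))

# expectation value of position: <x> = L/2 by symmetry
x_mean = sp.simplify(sp.integrate(x * psi**2, (x, 0, L)))
```

    The algebra system carries out the integrals symbolically, which is exactly the kind of routine manipulation such a course delegates to the software so that students can concentrate on the underlying theory.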

  13. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  14. Evolution of low-profile and lightweight electrical connectors for soldier-worn applications

    NASA Astrophysics Data System (ADS)

    Gans, Eric; Lee, Kang; Jannson, Tomasz; Walter, Kevin

    2011-06-01

    In addition to military radios, modern warfighters carry cell phones, GPS devices, computers, and night-vision aids, all of which require electrical cables and connectors for data and power transmission. Currently each electrical device operates via independent cables using conventional cable and connector technology. Conventional cables are stiff and difficult to integrate into a soldier-worn garment. Conventional connectors are tall and heavy, as they were designed to ensure secure connections to bulkhead-type panels, and being tall, represent significant snag-hazards in soldier-worn applications. Physical Optics Corporation has designed a new, lightweight and low-profile electrical connector that is more suitable for body-worn applications and operates much like a standard garment snap. When these connectors are mated, the combined height is <0.3 in. - a significant reduction from the 2.5 in. average height of conventional connectors. Electrical connections can be made with one hand (gloved or bare) and blindly (without looking). Furthermore, POC's connectors are integrated into systems that distribute data or power from a central location on the soldier's vest, reducing the length and weight of the cables necessary to interconnect various mission-critical electronic systems. The result is a lightweight power/data distribution system offering significant advantages over conventional electrical connectors in soldier-worn applications.

  15. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: The extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  16. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real-time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware yet avoid bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.

  17. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  18. Artificial intelligence. Fears of an AI pioneer.

    PubMed

    Russell, Stuart; Bohannon, John

    2015-07-17

    From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding— and fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.

  19. Technology and the Modern Library.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    1984-01-01

    Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…

  20. Time for a revolution: smart energy and microgrid use in disaster response.

    PubMed

    Callaway, David Wayne; Noste, Erin; McCahill, Peter Woods; Rossman, A J; Lempereur, Dominique; Kaney, Kathleen; Swanson, Doug

    2014-06-01

    Modern health care and disaster response are inextricably linked to high volume, reliable, quality power. Disasters place major strain on energy infrastructure in affected communities. Advances in renewable energy and microgrid technology offer the potential to improve mobile disaster medical response capabilities. However, very little is known about the energy requirements of and alternative power sources in disaster response. A gap analysis of the energy components of modern disaster response reveals multiple deficiencies. The MED-1 Green Project has been executed as a multiphase project designed to identify energy utilization inefficiencies, decrease demands on diesel generators, and employ modern energy management strategies to expand operational independence. This approach, in turn, allows for longer deployments in potentially more austere environments and minimizes the unit's environmental footprint. The ultimate goal is to serve as a proof of concept for other mobile medical units to create strategies for energy independence.

  1. Imaging sport at the Grosvenor School of Modern Art (1929-37).

    PubMed

    O'Mahony, Mike

    2011-01-01

    The mass popularity of sport in Britain during the inter-war years was a source of fascination and inspiration for a group of artists working at the Grosvenor School of Modern Art in London. Although largely neglected by their contemporaries, sport was embraced by Grosvenor School artists as a means to engage with both modernity and tradition within contemporary British culture. This essay examines one work, Cyril Power's 1930 linocut print, 'The Eight', as a case study to investigate the interrelationship between two cultural activities frequently regarded as at opposing ends of the cultural spectrum: art and sport. By simultaneously drawing upon a rich heritage of visual culture conventions and deploying new media and methods to represent the excitement, dynamism and sheer energy of sport, Power's work offers an insight into how visual culture can engage with, and enhance, our understanding of contemporary debates and practices in both fields of activity.

  2. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
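
    The claimed method, priority-ordered execution at an initial power level followed by conservation actions once a consumption threshold is reached, can be paraphrased in a short sketch. All names, units, and the specific conservation action (a fractional power-cap reduction) are illustrative assumptions, not the patent's implementation:

```python
def run_with_power_budget(apps, initial_power, budget, conserve_step=0.25):
    """Sketch of budget-based power consumption for prioritized applications.

    apps: list of (name, priority, work_units); higher priority runs first.
    Returns the power level each application ran at. Once cumulative
    consumption crosses `budget`, a conservation action lowers the power
    provided for the remaining (lower-priority) work.
    """
    power = initial_power
    consumed = 0.0
    levels = {}
    for name, _prio, work in sorted(apps, key=lambda a: -a[1]):
        levels[name] = power
        consumed += power * work                 # energy spent on this app
        if consumed >= budget:
            # conservation action: reduce power provided to compute nodes
            power = max(power * (1 - conserve_step), 0.0)
    return levels

# toy usage: three applications with descending priorities
apps = [("solver", 10, 4.0), ("viz", 5, 2.0), ("logger", 1, 1.0)]
levels = run_with_power_budget(apps, initial_power=100.0, budget=450.0)
```

    Here the highest-priority work runs at full power, and only the work scheduled after the threshold is crossed is throttled, mirroring the priority-then-conserve structure of the claim.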

  3. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  4. Solar Integration Data Sets | Grid Modernization | NREL

    Science.gov Websites

    Modeled solar data to study the operational impacts of solar on the electric power grid. Solar Power Data: for those who need to estimate power production from hypothetical solar power plants. Solar Integration National Dataset (SIND) Toolkit: the next generation of modeled solar data with higher temporal and spatial resolution.

  5. Computed tomographic evidence of atherosclerosis in the mummified remains of humans from around the world.

    PubMed

    Thompson, Randall C; Allam, Adel H; Zink, Albert; Wann, L Samuel; Lombardi, Guido P; Cox, Samantha L; Frohlich, Bruno; Sutherland, M Linda; Sutherland, James D; Frohlich, Thomas C; King, Samantha I; Miyamoto, Michael I; Monge, Janet M; Valladolid, Clide M; El-Halim Nur El-Din, Abd; Narula, Jagat; Thompson, Adam M; Finch, Caleb E; Thomas, Gregory S

    2014-06-01

    Although atherosclerosis is widely thought to be a disease of modernity, computed tomographic evidence of atherosclerosis has been found in the bodies of a large number of mummies. This article reviews the findings of atherosclerotic calcifications in the remains of ancient people: humans who lived across a very wide span of human history and over most of the inhabited globe. These people had a wide range of diets and lifestyles, and traditional modern risk factors do not thoroughly explain the presence and easy detectability of this disease. Nontraditional risk factors such as the inhalation of cooking fire smoke and chronic infection or inflammation might have been important atherogenic factors in ancient times. Study of the genetic and environmental risk factors for atherosclerosis in ancient people may offer insights into this common modern disease. Copyright © 2014 World Heart Federation (Geneva). Published by Elsevier B.V. All rights reserved.

  6. Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren. National Poverty Center Working Paper Series #13-02

    ERIC Educational Resources Information Center

    Fairlie, Robert W.; Robinson, Jonathan

    2013-01-01

    Computers are an important part of modern education, yet large segments of the population--especially low-income and minority children--lack access to a computer at home. Does this impede educational achievement? We test this hypothesis by conducting the largest-ever field experiment involving the random provision of free computers for home use to…

  7. A Computer Model for Red Blood Cell Chemistry

    DTIC Science & Technology

    1996-10-01

    There is a growing need for interactive computational tools for medical education and research. The most exciting paradigm for interactive education is simulation. Fluid Mod is a simulation-based computational tool developed in the late sixties and early seventies at ... to a modern Windows, object-oriented interface. This development will provide students with a useful computational tool for learning. More important ...

  8. Models for the modern power grid

    NASA Astrophysics Data System (ADS)

    Nardelli, Pedro H. J.; Rubido, Nicolas; Wang, Chengwei; Baptista, Murilo S.; Pomalaza-Raez, Carlos; Cardieri, Paulo; Latva-aho, Matti

    2014-10-01

    This article reviews different kinds of models for the electric power grid that can be used to understand the modern power system, the smart grid. From the physical network to abstract energy markets, we identify in the literature different aspects that co-determine the spatio-temporal multilayer dynamics of the power system. We start our review by showing how the generation, transmission and distribution characteristics of the traditional power grids are already subject to complex behaviour appearing as a result of the interplay between dynamics of the nodes and topology, namely synchronisation and cascade effects. When dealing with smart grids, the system complexity increases even more: on top of the physical network of power lines and controllable sources of electricity, the modernisation brings information networks, renewable intermittent generation, market liberalisation, and prosumers, among other aspects. In this case, we forecast a dynamical co-evolution of the smart grid and other kinds of networked systems that cannot be understood in isolation. This review compiles recent results that model electric power grids as complex systems, going beyond pure technological aspects. From this perspective, we then indicate possible ways to incorporate the diverse co-evolving systems into the smart grid model using, for example, network theory and multi-agent simulation.
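
    The synchronisation effects mentioned for the physical layer are commonly studied with the swing-equation (second-order Kuramoto) model of coupled generators and loads. A minimal sketch, with the topology and all parameters chosen purely for illustration:

```python
import numpy as np

def swing_step(theta, omega, P, K, adj, dt=0.01, alpha=0.5):
    """One Euler step of the swing-equation (second-order Kuramoto) model.

    theta, omega: phase and frequency deviation per node; P: net power
    injection (generators > 0, loads < 0); K: coupling strength;
    adj: symmetric adjacency matrix of the transmission network."""
    coupling = K * (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = P - alpha * omega + coupling      # damped, power-driven dynamics
    return theta + dt * omega, omega + dt * domega

# toy 3-node grid: one generator (P > 0) feeding two loads (P < 0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
theta = np.zeros(3)
omega = np.zeros(3)
P = np.array([0.2, -0.1, -0.1])
for _ in range(5000):
    theta, omega = swing_step(theta, omega, P, K=1.0, adj=adj)
# with balanced injections the grid settles into a synchronised state:
# frequencies decay to zero and phase differences carry the power flows
```

    Cascade studies build on the same picture by removing a line (zeroing an entry of `adj`) and asking whether the remaining network can still synchronise.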

  9. All biology is computational biology.

    PubMed

    Markowetz, Florian

    2017-03-01

    Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.

  10. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieving parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950

  11. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, efficiency of memory usage, and the capability of each AD tool to handle modern Fortran 90-95 elements such as structures and pointers, which either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing even large geophysical data assimilation problems to be handled. However, they can only be applied to numerical models written in earlier generations of programming languages.
Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
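    The operator-overloading approach mentioned above can be illustrated in a few lines. Below is a toy forward-mode example (illustrative only; real tools such as TAF or Tapenade process full Fortran codes and generate reverse-mode adjoints):

```python
class Dual:
    """Operator-overloading AD value: carries (value, derivative)
    and propagates derivatives through arithmetic via the chain
    rule. A toy forward-mode sketch, not a production AD tool."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the input with derivative 1 and read off df/dx."""
    return f(Dual(x, 1.0)).dot
```

    Running any existing numerical function on `Dual` inputs yields its sensitivity without hand-coding derivatives, which is exactly why overloading tools cope well with modern language features, at the cost of the run-time overhead the study measures.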

  12. Extending the knowledge in histochemistry and cell biology.

    PubMed

    Heupel, Wolfgang-Moritz; Drenckhahn, Detlev

    2010-01-01

    Central to modern Histochemistry and Cell Biology stands the need for visualization of cellular and molecular processes. In the past several years, a variety of techniques has emerged that bridges traditional light microscopy, fluorescence microscopy and electron microscopy with powerful software-based post-processing and computer modeling. Researchers now have various tools available to investigate problems of interest from a bird's-eye down to a worm's-eye view, focusing on tissues, cells, proteins or, finally, single molecules. Applications of new approaches in combination with well-established traditional techniques of mRNA, DNA or protein analysis have led to enlightening studies that have paved the way toward a better understanding of not only physiological but also pathological processes in the field of cell biology. This review is intended to summarize articles representing the progress made in "histo-biochemical" techniques and their manifold applications.

  13. Computer based guidance in the modern operating room: a historical perspective.

    PubMed

    Bova, Frank

    2010-01-01

    The past few decades have seen the introduction of many different and innovative approaches aimed at enhancing surgical technique. As microprocessors have decreased in size and increased in processing power, more sophisticated systems have been developed. Some systems have attempted to provide enhanced instrument control while others have attempted to provide tools for surgical guidance. These systems include robotics, image enhancements, and frame-based and frameless guidance procedures. In almost every case the system's design goals were achieved and surgical outcomes were enhanced, yet a vast majority of today's surgical procedures are conducted without the aid of these advances. As new tools are developed and existing tools refined, special attention to the systems interface and integration into the operating room environment will be required before increased utilization of these technologies can be realized.

  14. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  15. Model Checking for Verification of Interactive Health IT Systems

    PubMed Central

    Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui

    2015-01-01

    Rigorous methods for the design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes the resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of the analysis can be automated. PMID:26958166
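    At its core, explicit-state model checking exhaustively explores every state a system can reach and reports a counterexample path when a safety property fails. A toy sketch follows (the workflow states used in the test are hypothetical, not those of the paper's patient contact system):

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Explicit-state model checking sketch: breadth-first
    exploration of all reachable states. Returns a counterexample
    path ending in a 'bad' state, or None if the safety property
    holds on every reachable state."""
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, path = queue.popleft()
        if state in bad:
            return path          # shortest counterexample
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                  # property verified
```

    Because the search is exhaustive rather than test-driven, a None result is a proof over the modeled state space, which is what distinguishes model checking from conventional testing of interactive systems.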

  16. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
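    A representative locality optimization of the kind evaluated here is loop tiling. The sketch below is illustrative only (Python itself will not exhibit the cache effect; it shows the access-pattern transformation): a transpose is performed in small tiles so that both the read and the write footprints stay cache-sized.

```python
def transpose_tiled(a, block=32):
    """Cache-blocked transpose of a square matrix (list of lists).
    Visiting the matrix in block x block tiles keeps both a's rows
    and out's rows resident in cache; in C/Fortran this is the
    classic tiling optimization, shown here pattern-only."""
    n = len(a)
    out = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            # work entirely within one tile before moving on
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j][i] = a[i][j]
    return out
```

    The paper's finding is precisely that the best `block` size, and whether tiling helps at all, varies counterintuitively across the processors listed above.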

  17. Baseline Evaluations to Support Control Room Modernization at Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald L.; Joe, Jeffrey C.

    2015-02-01

    For any major control room modernization activity at a commercial nuclear power plant (NPP) in the U.S., a utility should carefully follow the four phases prescribed by the U.S. Nuclear Regulatory Commission in NUREG-0711, Human Factors Engineering Program Review Model. These four phases include Planning and Analysis, Design, Verification and Validation, and Implementation and Operation. While NUREG-0711 is a useful guideline, it is written primarily from the perspective of regulatory review, and it therefore does not provide a nuanced account of many of the steps the utility might undertake as part of control room modernization. The guideline is largely summative (intended to catalog final products) rather than formative (intended to guide the overall modernization process). In this paper, we highlight two crucial formative sub-elements of the Planning and Analysis phase specific to control room modernization that are not covered in NUREG-0711. These two sub-elements are the usability and ergonomics baseline evaluations. A baseline evaluation entails evaluating the system as built and currently in use. The usability baseline evaluation provides key insights into operator performance using the control system currently in place. The ergonomics baseline evaluation identifies possible deficiencies in the physical configuration of the control system. Both baseline evaluations feed into the design of the replacement system and subsequent summative benchmarking activities that help ensure that control room modernization represents a successful evolution of the control system.

  18. ExpoCast: Exposure Science for Prioritization and Toxicity Testing (T)

    EPA Science Inventory

    The US EPA National Center for Computational Toxicology (NCCT) has a mission to integrate modern computing and information technology with molecular biology to improve Agency prioritization of data requirements and risk assessment of chemicals. Recognizing the critical need for ...

  19. Government of the Soul and Genesis of the Modern Educational Discourse (1879-1911)

    ERIC Educational Resources Information Center

    Ramos do O, Jorge

    2005-01-01

    This article aims to illustrate that the modern educational project, discursively articulated until the end of the nineteenth century, owes much to the ethics that Christianity had earlier systematized, in the context of the disciplined dynamics brought by the Counter-Reformation. A kind of pastoral power remained within the enlightenment-humanist…

  20. Higher Education as a Common Good in China: A Case Study for Ideas and Practice

    ERIC Educational Resources Information Center

    Zhu, Jiangang

    2016-01-01

    China began its modern system of higher education with the establishment of the former Peking University in Beijing in 1898. The traditional Confucian philosophy of education has been replaced dramatically by modern western philosophy. However, since the communist party came to power in 1949, the philosophy of higher education has experienced many…

  1. Philosophy, Methodology and Action Research

    ERIC Educational Resources Information Center

    Carr, Wilfred

    2006-01-01

    The aim of this paper is to examine the role of methodology in action research. It begins by showing how, as a form of inquiry concerned with the development of practice, action research is nothing other than a modern 20th century manifestation of the pre-modern tradition of practical philosophy. It then draws on Gadamer's powerful vindication of…

  2. The Jinn and the Computer. Consumption and Identity in Arabic Children's Magazines

    ERIC Educational Resources Information Center

    Peterson, Mark Allen

    2005-01-01

    One of the fundamental problems facing middle-class Egyptian parents is the problem of how to ensure that their children are simultaneously modern and Egyptian. Arabic children's magazines offer a window into the processes by which consumption links childhood and modernity in the social imaginations of children and their parents as they construct…

  3. Adaptationism and intuitions about modern criminal justice.

    PubMed

    Petersen, Michael Bang

    2013-02-01

    Research indicates that individuals have incoherent intuitions about particular features of the criminal justice system. This could be seen as an argument against the existence of adapted computational systems for counter-exploitation. Here, I outline how the model developed by McCullough et al. readily predicts the production of conflicting intuitions in the context of modern criminal justice issues.

  4. [Some engineering problems on developing production industry of modern traditional Chinese medicine].

    PubMed

    Qu, Hai-bin; Cheng, Yi-yu; Wang, Yue-sheng

    2003-10-01

    Based on the review of some engineering problems on developing modern production industry of Traditional Chinese Medicine (TCM), the differences of TCM production industry between China and abroad were pointed out. Accelerating the application and extension of high-tech and computer integrated manufacturing system (CIMS) were suggested to promote the technology advancement of TCM industry.

  5. Status and Trend of Automotive Power Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Zhenxian

    2012-01-01

    Comprehensive requirements in aspects of cost, reliability, efficiency, form factor, weight, and volume for power electronics modules in modern electric drive vehicles have driven the development of automotive power packaging technology intensively. Innovation in materials, interconnections, and processing techniques is leading to enormous improvements in power modules. In this paper, the technical development of and trends in power module packaging are evaluated by examining technical details with examples of industrial products. The issues and development directions for future automotive power module packaging are also discussed.

  6. 2. CONTEXTUAL VIEW OF THE POST FALLS POWERHOUSE, WITH THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. CONTEXTUAL VIEW OF THE POST FALLS POWERHOUSE, WITH THE MODERN SUBSTATION AND OLD SWITCHING BUILDING IN THE LEFT FOREGROUND AND THE POWER PLANT IN THE RIGHT FOREGROUND, LOOKING SOUTH. - Washington Water Power Company Post Falls Power Plant, Middle Channel Powerhouse & Dam, West of intersection of Spokane & Fourth Streets, Post Falls, Kootenai County, ID

  7. Hidden Attraction - The History and Mystery of Magnetism

    NASA Astrophysics Data System (ADS)

    Verschuur, Gerrit L.

    1996-04-01

    Long one of nature's most fascinating phenomena, magnetism was once the subject of many superstitions. Magnets were thought useful to thieves, effective as a love potion, and as a cure for gout or spasms. They could remove sorcery from women and put demons to flight and even reconcile married couples. It was said that a lodestone pickled in the salt of sucking fish had the power to attract gold. Today, these beliefs have been put aside, but magnetism is no less remarkable for our modern understanding of it. In Hidden Attraction , Gerrit L. Verschuur, a noted astronomer and National Book Award nominee for The Invisible Universe , traces the history of our fascination with magnetism, from the mystery and superstition that propelled the first alchemical experiments with lodestone, through the more tangible works of Faraday, Maxwell, Hertz and other great pioneers of magnetism (scientists responsible for the extraordinary advances in modern science and technology, including radio, the telephone, and computers, that characterize the twentieth century), to state-of-the-art theories that see magnetism as a basic force in the universe. Boasting many informative illustrations, this is an adventure of the mind, using the specific phenomenon of magnetism to show how we have moved from an era of superstitions to one in which the Theory of Everything looms on the horizon.

  8. Chaotic dynamical aperture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.Y.; Tepikian, S.

    1985-01-01

    Nonlinear magnetic forces become more important for particles in modern large accelerators. These nonlinear elements are introduced either intentionally, to control beam dynamics, or by uncontrollable random errors. Equations of motion for the nonlinear Hamiltonian are usually non-integrable. Because of the nonlinear part of the Hamiltonian, the tune diagram of accelerators is a jungle. Nonlinear magnet multipoles are important in keeping the accelerator operation point in the safe quarter of the hostile jungle of resonant tunes. Indeed, all modern accelerator designs have taken advantage of nonlinear mechanics. On the other hand, the effect of the uncontrollable random multipoles should be evaluated carefully. A powerful method of studying the effect of these nonlinear multipoles is a particle tracking calculation, in which a group of test particles is traced through these magnetic multipoles in the accelerator for hundreds to millions of turns in order to test the dynamical aperture of the machine. These methods are extremely useful in the design of large accelerators such as the SSC, LEP, HERA and RHIC. These calculations unfortunately take a tremendous amount of computing time. In this review the method of determining chaotic orbits and applying it to nonlinear problems in accelerator physics is discussed. We then discuss the scaling properties and the effect of random sextupoles.

  9. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    ERIC Educational Resources Information Center

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS are two highly successful applications of modern linear algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…
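    PageRank itself reduces to computing the principal eigenvector of the Google matrix by power iteration, the baseline that such acceleration techniques improve upon. A minimal sketch follows (illustrative; dangling-node handling and the 0.85 damping factor follow the standard formulation, not any specific method from the thesis):

```python
def pagerank(links, damping=0.85, tol=1e-10):
    """Power-iteration PageRank on an adjacency dict
    {page: [outlinks]}. Repeatedly applies the Google matrix
    until the score vector stops changing; the fixed point is
    its principal eigenvector, normalized to sum to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling node: spread its rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new
```

    On web-scale graphs each iteration is a huge sparse matrix-vector product, which is why reducing the iteration count or cost is the natural target for acceleration.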

  10. Principles of Tablet Computing for Educators

    ERIC Educational Resources Information Center

    Katzan, Harry, Jr.

    2015-01-01

    In the study of modern technology for the 21st century, one of the most popular subjects is tablet computing. Tablet computers are now used in business, government, education, and the personal lives of practically everyone--at least, it seems that way. As of October 2013, Apple has sold 170 million iPads. The success of tablets is enormous and has…

  11. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislations on use of IT, users of technology could easily be held liable for violation of this legislation. There is however lack of understanding of social aspects of computing, and…

  12. The Effects of Modern Mathematics Computer Games on Mathematics Achievement and Class Motivation

    ERIC Educational Resources Information Center

    Kebritchi, Mansureh; Hirumi, Atsusi; Bai, Haiyan

    2010-01-01

    This study examined the effects of a computer game on students' mathematics achievement and motivation, and the role of prior mathematics knowledge, computer skill, and English language skill on their achievement and motivation as they played the game. A total of 193 students and 10 teachers participated in this study. The teachers were randomly…

  13. A Computational Turn: Fiction and the Forms of Invention, 1965-1980

    ERIC Educational Resources Information Center

    Sims, Matthew

    2017-01-01

    This dissertation examines the relationship between American fiction and computing. More specifically, it argues that each domain explored similar formal possibilities during a period I refer to as Early Postmodernism (1965-1980). In the case of computing, the 60s and 70s represent a turning point in which modern systems and approaches were…

  14. The modern library: lost and found.

    PubMed Central

    Lindberg, D A

    1996-01-01

    The modern library, a term that was heard frequently in the mid-twentieth century, has fallen into disuse. The over-promotion of computers and all that their enthusiasts promised probably hastened its demise. Today, networking is transforming how libraries provide--and users seek--information. Although the Internet is the natural environment for the health sciences librarian, it is going through growing pains as we face issues of censorship and standards. Today's "modern librarian" must not only be adept at using the Internet but must become familiar with digital information in all its forms--images, full text, and factual data banks. Most important, to stay "modern," today's librarians must embark on a program of lifelong learning that will enable them to make optimum use of the advantages offered by modern technology. PMID:8938334

  15. Design of active temperature compensated composite free-free beam MEMS resonators in a standard process

    NASA Astrophysics Data System (ADS)

    Xereas, George; Chodavarapu, Vamsy P.

    2014-03-01

    Frequency references are used in almost every modern electronic device including mobile phones, personal computers, and scientific and medical instrumentation. With modern consumer mobile devices imposing stringent requirements of low cost, low complexity, compact system integration and low power consumption, there has been significant interest to develop batch-manufactured MEMS resonators. An important challenge for MEMS resonators is to match the frequency and temperature stability of quartz resonators. We present 1MHz and 20MHz temperature compensated Free-Free beam MEMS resonators developed using PolyMUMPS, which is a commercial multi-user process available from MEMSCAP. We introduce a novel temperature compensation technique that enables high frequency stability over a wide temperature range. We used three strategies: passive compensation by using a structural gold (Au) layer on the resonator, active compensation through using a heater element, and a Free-Free beam design that minimizes the effects of thermal mismatch between the vibrating structure and the substrate. Detailed electro-mechanical simulations were performed to evaluate the frequency response and Quality Factor (Q). Specifically, for the 20MHz device, a Q of 10,000 was obtained for the passive compensated design. Finite Element Modeling (FEM) simulations were used to evaluate the Temperature Coefficient of frequency (TCf) of the resonators between -50°C and 125°C which yielded +0.638 ppm/°C for the active compensated, compared to -1.66 ppm/°C for the passively compensated design and -8.48 ppm/°C for uncompensated design for the 20MHz device. Electro-thermo-mechanical simulations showed that the heater element was capable of increasing the temperature of the resonators by approximately 53°C with an applied voltage of 10V and power consumption of 8.42 mW.
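    The figure of merit quoted above, the temperature coefficient of frequency (TCf), is the slope of fractional frequency change versus temperature, in ppm/°C. A small sketch of estimating it from measured (temperature, frequency) pairs follows (illustrative only; the data in the test are invented, not from the paper):

```python
def tcf_ppm_per_c(temps, freqs):
    """Least-squares slope of frequency vs temperature, scaled by
    the reference frequency: TCf = (1/f0) * df/dT * 1e6 ppm/degC,
    with f0 taken at the first (reference) temperature."""
    n = len(temps)
    f0 = freqs[0]
    mt = sum(temps) / n
    mf = sum(freqs) / n
    slope = (sum((t - mt) * (f - mf) for t, f in zip(temps, freqs))
             / sum((t - mt) ** 2 for t in temps))
    return slope / f0 * 1e6
```

    Applied to sweep data over -50°C to 125°C, this is how figures like the paper's -1.66 ppm/°C (passive) versus +0.638 ppm/°C (active) would be extracted from simulated or measured frequency curves.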

  16. Designing an operator interface? Consider user's 'psychology'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toffer, D.E.

    The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with the information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design options. All too often, however, systems go to the field designed for the software rather than the operator. Plant operators' jobs have changed fundamentally, from controlling their plants out in the field to doing so from within control rooms. Control room-based operation does not denote idleness. Trained operators should be engaged in examination of plant status and cognitive evaluation of plant efficiencies. Designers, who are often extremely computer literate, frequently do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems. As a result, they do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and to understand how they process this information physically. The physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) easily navigated menus; (2) reduced chances for misunderstanding; (3) accurate representations of the plant or operation; (4) consistent and predictable operation; (5) a pleasant and engaging interface that conforms to the operator's expectations.

  17. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor has increased in an attempt to compensate for the slight increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example, though performance is only superior beyond a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for CPUs are mature, and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
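    The k-mer use case starts from a simple serial kernel: tally every length-k window of a sequence. A minimal sketch follows (illustrative; the parallel versions discussed in the paper split such work across OpenMP threads or GPU work items):

```python
from collections import Counter

def count_kmers(seq, k):
    """Serial k-mer counting, the kind of low-level bioinformatics
    kernel targeted by automatic parallelization: every length-k
    substring (window) of seq is tallied in a hash map."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
```

    The irregular, hash-heavy memory access of this kernel is why, per the paper's results, CPU-targeted optimizers handled it better than GPU code generators, whereas the regular arithmetic of matrix multiplication favored GPUs.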

  19. An efficient solution of real-time data processing for multi-GNSS network

    NASA Astrophysics Data System (ADS)

    Gong, Xiaopeng; Gu, Shengfeng; Lou, Yidong; Zheng, Fu; Ge, Maorong; Liu, Jingnan

    2017-12-01

    Global navigation satellite systems (GNSS) are an indispensable tool for geodetic research and global monitoring of the Earth, and they have developed rapidly over the past few years with abundant GNSS networks, modern constellations, and significant improvement in the mathematical models used for data processing. However, due to the increasing number of satellites and stations, computational efficiency has become a key issue that could hamper the further development of GNSS applications. In this contribution, this problem is addressed from the perspectives of both dense linear algebra algorithms and GNSS processing strategy. First, in order to fully exploit the power of modern microprocessors, a square root information filter solution based on blocked QR factorization, employing as many matrix-matrix operations as possible, is introduced. In addition, the algorithmic complexity of GNSS data processing is further decreased by centralizing the carrier-phase observations and ambiguity parameters, as well as performing real-time ambiguity resolution and elimination. Based on the QR factorization of a simulated matrix, we conclude that, compared to unblocked QR factorization, the blocked variant can improve processing efficiency by nearly two orders of magnitude on a personal computer with four 3.30 GHz cores. Then, with 82 globally distributed stations, the processing efficiency is further validated in multi-GNSS (GPS/BDS/Galileo) satellite clock estimation. The results suggest that the unblocked method takes about 31.38 s per epoch, while, without any loss of accuracy, our new algorithm takes only 0.50 and 0.31 s per epoch for float and fixed clock solutions, respectively.
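    The benefit of blocking comes from recasting the factorization in terms of matrix-matrix (BLAS-3) products. Below is a block Gram-Schmidt QR sketch in Python with NumPy, illustrating the blocking idea only; a production square root information filter would use blocked Householder QR, which is numerically more robust:

```python
import numpy as np

def blocked_qr(a, block=64):
    """Block Gram-Schmidt QR of a tall matrix a (m x n, m >= n).
    Each panel of 'block' columns is projected against all earlier
    Q columns with matrix-matrix products (the BLAS-3 work that
    yields the speedup over column-at-a-time BLAS-2 QR), then
    factored with a dense panel QR."""
    m, n = a.shape
    q = np.zeros((m, n))
    r = np.zeros((n, n))
    for j in range(0, n, block):
        e = min(j + block, n)
        # BLAS-3 updates against all previously finished panels
        r[:j, j:e] = q[:, :j].T @ a[:, j:e]
        panel = a[:, j:e] - q[:, :j] @ r[:j, j:e]
        q[:, j:e], r[j:e, j:e] = np.linalg.qr(panel)
    return q, r
```

    The larger the block, the more of the flops land in matrix-matrix products, which is the same principle behind the near two-order-of-magnitude gain reported for the blocked filter.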

  20. Applications of absorption spectroscopy using quantum cascade lasers.

    PubMed

    Zhang, Lizhu; Tian, Guang; Li, Jingsong; Yu, Benli

    2014-01-01

    Infrared laser absorption spectroscopy (LAS) is a promising modern technique for sensing trace gases with high sensitivity, selectivity, and high time resolution. Mid-infrared quantum cascade lasers, operating in a pulsed or continuous wave mode, have potential as spectroscopic sources because of their narrow linewidths, single mode operation, tunability, high output power, reliability, low power consumption, and compactness. This paper reviews some important developments in modern laser absorption spectroscopy based on the use of quantum cascade laser (QCL) sources. Among the various laser spectroscopic methods, this review is focused on selected absorption spectroscopy applications of QCLs, with particular emphasis on molecular spectroscopy, industrial process control, combustion diagnostics, and medical breath analysis.
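    Quantitatively, laser absorption sensing rests on the Beer-Lambert law: the transmitted fraction of light falls exponentially with absorber concentration and path length, so a measured intensity ratio can be inverted to a gas concentration. A minimal sketch follows (parameter names are illustrative; `alpha` here lumps together the line strength and lineshape factors at the laser wavelength):

```python
import math

def transmitted_fraction(alpha, length_cm, concentration):
    """Beer-Lambert law: I/I0 = exp(-alpha * C * L), with alpha the
    effective absorption coefficient, C the concentration, and L
    the optical path length in cm."""
    return math.exp(-alpha * concentration * length_cm)

def concentration_from_transmission(i_over_i0, alpha, length_cm):
    """Invert the law to retrieve concentration from the measured
    transmitted fraction, as a trace-gas sensor would."""
    return -math.log(i_over_i0) / (alpha * length_cm)
```

    The narrow linewidth and tunability of QCLs matter precisely because they pin `alpha` to a single, well-isolated molecular absorption line, making this inversion selective to one gas species.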
