Sample records for growing computer power

  1. A Serial Bus Architecture for Parallel Processing Systems

    DTIC Science & Technology

    1986-09-01

    The wider the communication path, the more pins are needed to effect the data transfer. As Integrated Circuits grow in computational power, more communication capacity is needed, pushing…

  2. The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof

    ERIC Educational Resources Information Center

    Borwein, Jonathan M.

    2005-01-01

    The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers and the pervasive presence of the Internet allow for mathematicians, students and teachers, to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…

  3. The Next Computer Revolution.

    ERIC Educational Resources Information Center

    Peled, Abraham

    1987-01-01

    Discusses some of the future trends in the use of the computer in our society, suggesting that computing is now entering a new phase in which it will grow exponentially more powerful, flexible, and sophisticated in the next decade. Describes some of the latest breakthroughs in computer hardware and software technology. (TW)

  4. Agent-Based Multicellular Modeling for Predictive Toxicology

    EPA Science Inventory

    Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...

  5. Mobile Computing: Trends Enabling Virtual Management

    ERIC Educational Resources Information Center

    Kuyatt, Alan E.

    2011-01-01

    The growing power of mobile computing, with its constantly available wireless link to information, creates an opportunity to use innovative ways to work from any location. This technological capability allows companies to remove constraints of physical proximity so that people and enterprises can work together at a distance. Mobile computing is…

  6. COMPUTER-AIDED DRUG DISCOVERY AND DEVELOPMENT (CADDD): in silico-chemico-biological approach

    PubMed Central

    Kapetanovic, I.M.

    2008-01-01

    It is generally recognized that drug discovery and development are very time- and resource-consuming processes. There is an ever growing effort to apply computational power to the combined chemical and biological space in order to streamline drug discovery, design, development and optimization. In the biomedical arena, computer-aided or in silico design is being utilized to expedite and facilitate hit identification, hit-to-lead selection, optimize the absorption, distribution, metabolism, excretion and toxicity profile and avoid safety issues. Commonly used computational approaches include ligand-based drug design (pharmacophore, a 3-D spatial arrangement of chemical features essential for biological activity), structure-based drug design (drug-target docking), and quantitative structure-activity and quantitative structure-property relationships. Regulatory agencies as well as the pharmaceutical industry are actively involved in the development of computational tools that will improve the effectiveness and efficiency of the drug discovery and development process, decrease the use of animals, and increase predictability. It is expected that the power of CADDD will grow as the technology continues to evolve. PMID:17229415

  7. Language Networks as Complex Systems

    ERIC Educational Resources Information Center

    Lee, Max Kueiming; Ou, Sheue-Jen

    2008-01-01

    Starting in the late eighties, with a growing discontent with analytical methods in science and the growing power of computers, researchers began to study complex systems such as living organisms, evolution of genes, biological systems, brain neural networks, epidemics, ecology, economy, social networks, etc. In the early nineties, the research…

  8. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption)… As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in…

  9. Techno-Human Mesh: The Growing Power of Information Technologies.

    ERIC Educational Resources Information Center

    West, Cynthia K.

    This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…

  10. Defeating Insider Attacks via Autonomic Self-Protective Networks

    ERIC Educational Resources Information Center

    Sibai, Faisal M.

    2012-01-01

    There has been a constantly growing security concern with insider attacks on network-accessible computer systems. Users with power credentials can do almost anything they want with the systems they own, with very little control or oversight. Most breaches occurring nowadays by power users are considered legitimate access and not necessarily…

  11. Personal Computing and Academic Library Design.

    ERIC Educational Resources Information Center

    Bazillion, Richard J.

    1992-01-01

    Notebook computers of increasing power and portability offer unique advantages to library users. Connecting easily to a campus data network, they are small, silent workstations capable of drawing information from a variety of sources. Designers of new library buildings may assume that users in growing numbers will carry these multipurpose…

  12. Research on Using the Naturally Cold Air and the Snow for Data Center Air-conditioning, and Humidity Control

    NASA Astrophysics Data System (ADS)

    Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko

    Lowering power consumption has become a worldwide concern. It is also becoming a larger issue in computer systems, as reflected by the growing use of software-as-a-service and cloud computing, whose market has expanded since 2000; at the same time, the number of data centers that house and manage these computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage and is still rising rapidly. This research focuses on air-conditioning, which accounts for the largest portion of electric power consumption at data centers, and proposes a technique to lower power consumption by using naturally cold air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we examine the extent to which energy reduction is possible when a data center is located in Hokkaido.

  13. Federal Barriers to Innovation

    ERIC Educational Resources Information Center

    Miller, Raegen; Lake, Robin

    2012-01-01

    With educational outcomes inadequate, resources tight, and students' academic needs growing more complex, America's education system is certainly ready for technological innovation. And technology itself is ripe to be exploited. Devices harnessing cheap computing power have become smart and connected. Voice recognition, artificial intelligence,…

  14. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druskin, V.; Lee, Ping; Knizhnerman, L.

    There is now a growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When the matrix inverse is relatively inexpensive to compute, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
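
    A minimal Python sketch (not taken from the record) of an extended Krylov approximation of a matrix-function action, assuming a small dense nonsingular matrix A, a vector b, and the matrix exponential as the target function; the basis mixes positive and negative powers of A as the abstract describes, and all names and sizes below are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import expm, lu_factor, lu_solve

    def extended_krylov_expm(A, b, m=5):
        """Approximate exp(A) @ b in the extended Krylov subspace
        span{b, A^{-1} b, A b, A^{-2} b, A^2 b, ...} of dimension 2*m + 1.
        Dense sketch: A is assumed nonsingular and small enough to factorize once."""
        lu = lu_factor(A)                      # reuse one factorization for all A^{-1} actions
        vecs = [b]
        v_pos, v_neg = b, b
        for _ in range(m):
            v_neg = lu_solve(lu, v_neg)        # next negative power of A applied to b
            v_pos = A @ v_pos                  # next positive power of A applied to b
            vecs.extend([v_neg, v_pos])
        V, _ = np.linalg.qr(np.column_stack(vecs))   # orthonormal basis of the subspace
        H = V.T @ A @ V                        # projection of A onto the subspace
        return V @ (expm(H) @ (V.T @ b))       # lift the small-space result back

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = -np.diag(np.linspace(0.1, 5.0, 200)) + 0.01 * rng.standard_normal((200, 200))
        b = rng.standard_normal(200)
        approx = extended_krylov_expm(A, b, m=8)
        exact = expm(A) @ b
        print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
    ```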

  15. Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy

    PubMed Central

    Kaddi, Chanchala D; Phan, John H; Wang, May D

    2016-01-01

    Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967

  16. Neural network approaches to capture temporal information

    NASA Astrophysics Data System (ADS)

    van Veelen, Martijn; Nijhuis, Jos; Spaanenburg, Ben

    2000-05-01

    The automated design and construction of neural networks is receiving growing attention from the neural networks community. Both the growing availability of computing power and the development of mathematical and probabilistic theory have had a significant impact on neural network design and modelling approaches. This impact is most apparent in the use of neural networks for time series prediction. In this paper, we give our views on past, contemporary and future design and modelling approaches to neural forecasting.

  17. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  18. Press Releases | Argonne National Laboratory

    Science.gov Websites

    Press-release items include coverage of renewable energy such as wind and solar power (April 25, 2018) and Argonne's announcement of the second cohort of Chain Reaction Innovations (April 18, 2018).

  19. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
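
    The record does not spell out its reordering algorithm; the sketch below is only a generic greedy heuristic, with hypothetical job names and an assumed power cap, meant to illustrate how a job schedule can be reshaped to keep the facility's peak draw under a limit.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        duration: int      # time slots the job occupies
        avg_power: float   # kW drawn while running

    def schedule_under_cap(jobs, power_cap, horizon):
        """Greedy sketch: place each job (longest first) at the earliest start slot
        where the running power total in every occupied slot stays under power_cap.
        Jobs that cannot fit within the horizon are simply left unscheduled."""
        usage = [0.0] * horizon
        starts = {}
        for job in sorted(jobs, key=lambda j: j.duration, reverse=True):
            for start in range(horizon - job.duration + 1):
                window = usage[start:start + job.duration]
                if all(u + job.avg_power <= power_cap for u in window):
                    for t in range(start, start + job.duration):
                        usage[t] += job.avg_power
                    starts[job.name] = start
                    break
        return starts, max(usage)

    # Hypothetical jobs and cap, purely illustrative.
    jobs = [Job("cfd", 6, 300.0), Job("climate", 4, 450.0), Job("genomics", 3, 200.0)]
    starts, peak = schedule_under_cap(jobs, power_cap=600.0, horizon=24)
    print(starts, peak)
    ```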

  20. The universal robot

    NASA Technical Reports Server (NTRS)

    Moravec, Hans

    1993-01-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  1. The universal robot

    NASA Astrophysics Data System (ADS)

    Moravec, Hans

    1993-12-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  2. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
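
    For context, a hedged serial sketch of the Benettin-style Gram-Schmidt (QR) procedure whose cost the record describes; the coupled logistic-map lattice used here is a stand-in system, not the Lennard-Jones fluid of the paper, and every parameter value is illustrative.

    ```python
    import numpy as np

    def lyapunov_spectrum(step, jacobian, x0, n_steps, n_exp):
        """Benettin/Gram-Schmidt sketch: push n_exp tangent vectors through the
        Jacobian, re-orthonormalize with QR each step, and average the logs of
        the diagonal of R to estimate the leading Lyapunov exponents."""
        x = np.array(x0, dtype=float)
        Q = np.eye(len(x))[:, :n_exp]
        sums = np.zeros(n_exp)
        for _ in range(n_steps):
            Q = jacobian(x) @ Q            # evolve the tangent vectors
            Q, R = np.linalg.qr(Q)         # the expensive reorthogonalization step
            sums += np.log(np.abs(np.diag(R)))
            x = step(x)                    # advance the trajectory itself
        return sums / n_steps

    # Stand-in dynamics: a small coupled logistic-map lattice (illustrative only).
    r, eps, N = 3.9, 0.1, 16
    def step(x):
        f = r * x * (1 - x)
        return (1 - eps) * f + eps * 0.5 * (np.roll(f, 1) + np.roll(f, -1))

    def jacobian(x):
        df = np.diag(r * (1 - 2 * x))
        C = (1 - eps) * np.eye(N) + eps * 0.5 * (np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0))
        return C @ df

    x0 = np.random.default_rng(1).random(N)
    print(lyapunov_spectrum(step, jacobian, x0, n_steps=2000, n_exp=4))
    ```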

  3. Climate Ocean Modeling on a Beowulf Class System

    NASA Technical Reports Server (NTRS)

    Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.

    2000-01-01

    With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and the Cray T3E for an ocean model used in present-day oceanographic research.

  4. A Format for Phylogenetic Placements

    PubMed Central

    Matsen, Frederick A.; Hoffman, Noah G.; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement. PMID:22383988
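
    As an illustration, a small Python sketch of what a placement document in such a JSON-based format might contain; the field layout follows the format as the abstract describes it (edge numbers embedded in the reference tree, a list of per-read placements), but the tree, read names, and all numeric values below are invented.

    ```python
    import json

    # Hypothetical placement of two short reads onto a three-taxon reference tree.
    # Edge numbers appear in curly braces in the Newick string; values are invented.
    placement_doc = {
        "version": 3,
        "tree": "((A:0.2{0},B:0.1{1}):0.3{2},C:0.4{3}):0{4};",
        "fields": ["edge_num", "likelihood", "like_weight_ratio",
                   "distal_length", "pendant_length"],
        "placements": [
            {"p": [[0, -1200.5, 0.8, 0.05, 0.02],
                   [2, -1203.1, 0.2, 0.10, 0.03]],
             "n": ["read_001"]},
            {"p": [[3, -980.7, 1.0, 0.01, 0.04]],
             "n": ["read_002"]},
        ],
        "metadata": {"invocation": "illustrative example only"},
    }

    with open("example.jplace", "w") as fh:
        json.dump(placement_doc, fh, indent=2)

    # Any JSON parser can read the document back and iterate over placements.
    with open("example.jplace") as fh:
        doc = json.load(fh)
    for placement in doc["placements"]:
        best = max(placement["p"], key=lambda row: row[2])  # highest like_weight_ratio
        print(placement["n"][0], "-> edge", best[0])
    ```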

  5. A format for phylogenetic placements.

    PubMed

    Matsen, Frederick A; Hoffman, Noah G; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.

  6. Exploring Social Networking: Developing Critical Literacies

    ERIC Educational Resources Information Center

    Watson, Pauline

    2012-01-01

    While schools have been using computers within their classrooms for years now, there has been a purposeful ignoring of the growing power of social networks such as Facebook and Twitter. Many schools ban students from accessing and using sites such as Facebook at school and many English and literacy teachers ignore or deny their value as a teaching…

  7. Variable Generation Power Forecasting as a Big Data Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haupt, Sue Ellen; Kosovic, Branko

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  8. Variable Generation Power Forecasting as a Big Data Problem

    DOE PAGES

    Haupt, Sue Ellen; Kosovic, Branko

    2016-10-10

    To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  9. Mechanical Computing Redux: Limitations at the Nanoscale

    NASA Astrophysics Data System (ADS)

    Liu, Tsu-Jae King

    2014-03-01

    Technology solutions for overcoming the energy efficiency limits of nanoscale complementary metal oxide semiconductor (CMOS) technology ultimately will be needed in order to address the growing issue of integrated-circuit chip power density. Off-state leakage current sets a fundamental lower limit in energy per operation for any voltage-level-based digital logic implemented with transistors (CMOS and beyond), which leads to practical limits for device density (i.e. cost) and operating frequency (i.e. system performance). Mechanical switches have zero off-state leakage and hence can overcome this fundamental limit. Contact adhesive force sets a lower limit for the switching energy of a mechanical switch, however, and also directly impacts its performance. This paper will review recent progress toward the development of nano-electro-mechanical relay technology and discuss remaining challenges for realizing the promise of mechanical computing for ultra-low-power computing. Supported by the Center for Energy Efficient Electronics Science (NSF Award 0939514).

  10. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  11. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  12. CBESW: sequence alignment on the Playstation 3.

    PubMed

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-09-17

    The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications.
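
    For reference, a plain-Python sketch of the Smith-Waterman local-alignment recurrence that the record's Cell Broadband Engine implementation accelerates; the linear gap penalty and scoring values are arbitrary choices, and this serial version makes no attempt at the SIMD-style parallelism the paper exploits.

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Serial Smith-Waterman sketch with linear gap penalties.
        Returns the best local-alignment score between sequences a and b."""
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                up = H[i - 1][j] + gap
                left = H[i][j - 1] + gap
                H[i][j] = max(0, diag, up, left)   # 0 restarts the local alignment
                best = max(best, H[i][j])
        return best

    print(smith_waterman("GATTACA", "GCATGCU"))
    ```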

  13. CBESW: Sequence Alignment on the Playstation 3

    PubMed Central

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-01-01

    Background: The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation® 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. Results: For large datasets, our implementation on the PlayStation® 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. Conclusion: The results from our experiments demonstrate that the PlayStation® 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications. PMID:18798993

  14. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  15. Quantum machine learning: a classical perspective

    PubMed Central

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  16. Quantum machine learning: a classical perspective.

    PubMed

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  17. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

    With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.

  18. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  19. Review on open source operating systems for internet of things

    NASA Astrophysics Data System (ADS)

    Wang, Zhengmin; Li, Wei; Dong, Huiliang

    2017-08-01

    The Internet of Things (IoT) is an environment in which everyday devices become smart in a connected world. The IoT is growing rapidly; it is an integrated system of uniquely identifiable communicating devices that exchange information over a connected network to provide extensive services. IoT devices have very limited memory, computational power, and power supply, and traditional operating systems (OS) cannot meet the needs of IoT systems. In this paper, we analyze the challenges of IoT OSs and survey applicable open-source OSs.

  20. A review on economic emission dispatch problems using quantum computational intelligence

    NASA Astrophysics Data System (ADS)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

    Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, limited natural resources and global warming make this topic a center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper should encourage researchers to use more QCI-based algorithms to obtain better optimal results for solving EED problems.

  1. Community Science Workshops: A Powerful and Feasible Model for Serving Underserved Youth. An Evaluation Brief

    ERIC Educational Resources Information Center

    Inverness Research Associates, 2007

    2007-01-01

    The people at Inverness Research Associates spent 12 years studying Community Science Workshops (CSW) in California and in six other states. They gathered statistics on the scale, scope, and cost-efficiency of CSW services to youth. They observed youth at work in the shops--taking apart computers, repairing bikes, growing plants, and so on--and…

  2. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
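
    A schematic sketch of the coarse projective integration loop described above, with a toy noisy logistic-growth model standing in for the cellular-automaton tumor simulator; the burst length, projection horizon, and lifting/restriction operators are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def coarse_projective_integration(micro_step, lift, restrict, u0,
                                      burst_steps=20, project_steps=80,
                                      dt=0.01, n_cycles=50):
        """Equation-free sketch: alternate short microscopic bursts with
        forward extrapolation (projection) of the coarse variable u."""
        u = np.asarray(u0, dtype=float)
        for _ in range(n_cycles):
            state = lift(u)                          # microscopic state consistent with u
            history = []
            for _ in range(burst_steps):
                state = micro_step(state, dt)        # run the fine-scale simulator briefly
                history.append(restrict(state))      # record the coarse observable
            tail = np.array(history[-5:])
            du_dt = np.polyfit(dt * np.arange(5), tail, 1)[0]   # slope over the last burst points
            u = history[-1] + project_steps * dt * du_dt        # projective (extrapolation) step
        return u

    # Stand-in microscopic model: noisy logistic growth of a cell count.
    rng = np.random.default_rng(0)
    def micro_step(cells, dt):
        growth = 0.5 * cells * (1 - cells / 1e4) * dt
        return cells + growth + rng.normal(0, 1, size=cells.shape)

    lift = lambda u: np.full(64, u)          # many noisy realizations sharing the coarse mean
    restrict = lambda cells: cells.mean()    # coarse variable = mean population

    print(coarse_projective_integration(micro_step, lift, restrict, u0=100.0))
    ```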

  3. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing

    PubMed Central

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique. PMID:28085932
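
    The simulator's own power model is not reproduced in the record; the sketch below only illustrates the textbook DVFS trade-off it exercises, with dynamic power approximated as C·V²·f plus a static term, so that lowering the operating point cuts power faster than it stretches runtime until static power starts to dominate. All constants and operating points are invented for illustration.

    ```python
    def task_energy(cycles, freq_hz, volt, c_eff=1e-9, p_static=2.0):
        """Sketch of the standard DVFS energy model:
        dynamic power ~ C_eff * V^2 * f, runtime = cycles / f,
        energy = (dynamic + static power) * runtime."""
        runtime = cycles / freq_hz
        p_dyn = c_eff * volt ** 2 * freq_hz
        return (p_dyn + p_static) * runtime, runtime

    # Hypothetical (frequency, voltage) operating points a governor could pick from.
    operating_points = [(2.4e9, 1.20), (1.8e9, 1.05), (1.2e9, 0.90)]
    cycles = 6e9  # amount of work in the task, illustrative

    for f, v in operating_points:
        energy, runtime = task_energy(cycles, f, v)
        print(f"{f / 1e9:.1f} GHz @ {v:.2f} V: {runtime:.2f} s, {energy:.1f} J")
    ```

    Running this toy model shows the middle operating point using the least energy, a reminder that static power can make the slowest setting counterproductive.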

  4. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    PubMed

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.

  5. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  6. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Automatic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  7. Towards quantum chemistry on a quantum computer.

    PubMed

    Lanyon, B P; Whitfield, J D; Gillett, G G; Goggin, M E; Almeida, M P; Kassal, I; Biamonte, J D; Mohseni, M; Powell, B J; Barbieri, M; Aspuru-Guzik, A; White, A G

    2010-02-01

    Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.

  8. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
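
    The record benchmarks these tasks on specific handsets rather than publishing code; as a hedged desktop-side sketch, the three classic tasks it lists can be expressed with a few standard OpenCV calls (the input file name is a placeholder).

    ```python
    import cv2

    img = cv2.imread("scene.jpg")               # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Keypoint extraction with ORB, a typical choice on resource-limited devices.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Face detection with a pretrained Haar cascade shipped with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    faces = cv2.CascadeClassifier(cascade_path).detectMultiScale(gray, 1.1, 5)

    # Simple image segmentation via k-means clustering of pixel colors.
    pixels = img.reshape(-1, 3).astype("float32")
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)

    print(len(keypoints), "keypoints,", len(faces), "faces,", len(set(labels.ravel())), "segments")
    ```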

  9. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.

  10. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  11. Automation technology for aerospace power management

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1982-01-01

    The growing size and complexity of spacecraft power systems coupled with limited space/ground communications necessitate increasingly automated onboard control systems. Research in computer science, particularly artificial intelligence has developed methods and techniques for constructing man-machine systems with problem-solving expertise in limited domains which may contribute to the automation of power systems. Since these systems perform tasks which are typically performed by human experts they have become known as Expert Systems. A review of the current state of the art in expert systems technology is presented, and potential applications in power systems management are considered. It is concluded that expert systems appear to have significant potential for improving the productivity of operations personnel in aerospace applications, and in automating the control of many aerospace systems.

  12. Supporting 21st-Century Teaching and Learning: The Role of Google Apps for Education (GAFE)

    ERIC Educational Resources Information Center

    Awuah, Lawrence J.

    2015-01-01

    The future of higher education is likely to be driven by the willingness to adapt and grow with the use of technologies in teaching, learning, and research. Google Apps for Education (GAFE) is a powerful cloud-computing solution that works for students regardless of their location, time, or the type of device being used. GAFE is used by…

  13. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  14. Computational Analysis of a Thermoelectric Generator for Waste-Heat Harvesting in Wearable Systems

    NASA Astrophysics Data System (ADS)

    Kossyvakis, D. N.; Vassiliadis, S. G.; Vossou, C. G.; Mangiorou, E. E.; Potirakis, S. M.; Hristoforou, E. V.

    2016-06-01

    Over recent decades, a constantly growing interest in the field of portable electronic devices has been observed. Recent developments in the scientific areas of integrated circuits and sensing technologies have enabled realization and design of lightweight low-power wearable sensing systems that can be of great use, especially for continuous health monitoring and performance recording applications. However, to facilitate wide penetration of such systems into the market, the issue of ensuring their seamless and reliable power supply still remains a major concern. In this work, the performance of a thermoelectric generator, able to exploit the temperature difference established between the human body and the environment, has been examined computationally using ANSYS 14.0 finite-element modeling (FEM) software, as a means for providing the necessary power to various portable electronic systems. The performance variation imposed due to different thermoelement geometries has been estimated to identify the most appropriate solution for the considered application. Furthermore, different ambient temperature and heat exchange conditions between the cold side of the generator and the environment have been investigated. The computational analysis indicated that power output in the order of 1.8 mW can be obtained by a 100-cm2 system, if specific design criteria can be fulfilled.
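
    As a rough cross-check on the milliwatt-scale figure above, the matched-load output of a thermoelectric module follows P_max = (N·α·ΔT)²/(4·R_int); every value in the sketch below (couple count, Seebeck coefficient, internal resistance, temperature difference) is an assumed illustrative number, not one taken from the record's FEM model.

    ```python
    def teg_matched_load_power(n_couples, seebeck_v_per_k, delta_t_k, r_internal_ohm):
        """Matched-load power of a thermoelectric generator:
        open-circuit voltage V = N * alpha * dT, and P_max = V^2 / (4 * R_int)."""
        v_oc = n_couples * seebeck_v_per_k * delta_t_k
        return v_oc ** 2 / (4.0 * r_internal_ohm)

    # Assumed wearable-scale values, for illustration only.
    p_watts = teg_matched_load_power(n_couples=127,
                                     seebeck_v_per_k=400e-6,   # per thermocouple
                                     delta_t_k=2.0,            # body-to-ambient across the module
                                     r_internal_ohm=3.0)
    print(f"{p_watts * 1e3:.2f} mW")
    ```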

  15. Power system observability and dynamic state estimation for stability monitoring using synchrophasor measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Kai; Qi, Junjian; Kang, Wei

    2016-08-01

    Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.
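
    The UKF mentioned above propagates a set of sigma points through the generator dynamics rather than linearizing them. The sketch below shows only that prediction step for a single classical machine governed by the swing equation; the machine parameters, the Euler integration, and the filter constants are illustrative assumptions, not the paper's 9-bus or 140-bus setups.

```python
# Minimal unscented-transform prediction step for a swing-equation state
# [rotor angle delta, rotor speed omega].  Illustrative parameters only.
import numpy as np

def f(x, dt=0.01, H=3.0, D=1.0, Pm=0.9, Pe_max=1.2, ws=2*np.pi*60):
    """One Euler step of the classical swing equation."""
    delta, omega = x
    Pe = Pe_max * np.sin(delta)
    ddelta = omega - ws
    domega = (ws / (2*H)) * (Pm - Pe - D*(omega - ws)/ws)
    return np.array([delta + dt*ddelta, omega + dt*domega])

def unscented_predict(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([x, x + L.T, x - L.T])          # 2n+1 sigma points
    Wm = np.full(2*n + 1, 1.0 / (2*(n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])               # propagate through dynamics
    x_pred = Wm @ Y
    P_pred = sum(w * np.outer(y - x_pred, y - x_pred) for w, y in zip(Wc, Y))
    return x_pred, P_pred

x0 = np.array([0.5, 2*np.pi*60])     # rotor angle (rad), speed (rad/s)
P0 = np.diag([1e-3, 1e-2])
print(unscented_predict(x0, P0))
```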

  16. Using architecture information and real-time resource state to reduce power consumption and communication costs in parallel applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, James M.; Devine, Karen Dragon; Gentile, Ann C.

    2014-09-01

    As computer systems grow in both size and complexity, the need for applications and run-time systems to adjust to their dynamic environment also grows. The goal of the RAAMP LDRD was to combine static architecture information and real-time system state with algorithms to conserve power, reduce communication costs, and avoid network contention. We developed new data collection and aggregation tools to extract static hardware information (e.g., node/core hierarchy, network routing) as well as real-time performance data (e.g., CPU utilization, power consumption, memory bandwidth saturation, percentage of used bandwidth, number of network stalls). We created application interfaces that allowed this data to be used easily by algorithms. Finally, we demonstrated the benefit of integrating system and application information for two use cases. The first used real-time power consumption and memory bandwidth saturation data to throttle concurrency to save power without increasing application execution time. The second used static or real-time network traffic information to reduce or avoid network congestion by remapping MPI tasks to allocated processors. Results from our work are summarized in this report; more details are available in our publications [2, 6, 14, 16, 22, 29, 38, 44, 51, 54].
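
    The first use case amounts to a simple feedback loop: when memory bandwidth is saturated, extra threads only burn power, so concurrency can be reduced. The sketch below illustrates that loop; the two hooks are hypothetical stand-ins (here simulated with random numbers), not the actual RAAMP data-collection or application interfaces.

```python
# Throttle worker concurrency when measured memory-bandwidth saturation is
# high, saving power without extending runtime.  Hooks are stand-ins.
import random
import time

def read_bandwidth_saturation():
    # Stand-in for a real-time hardware-counter read (fraction of peak, 0..1).
    return random.uniform(0.4, 1.0)

def set_active_threads(n):
    # Stand-in for a runtime hook that parks or wakes worker threads.
    print(f"active threads -> {n}")

def throttle_loop(max_threads=16, high=0.85, low=0.60, period_s=0.1, steps=20):
    n = max_threads
    for _ in range(steps):
        sat = read_bandwidth_saturation()
        if sat > high and n > 1:
            n -= 1          # memory-bound: extra threads only burn power
        elif sat < low and n < max_threads:
            n += 1          # headroom available: restore concurrency
        set_active_threads(n)
        time.sleep(period_s)

throttle_loop()
```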

  17. TABLET: The personal computer of the year 2000

    NASA Technical Reports Server (NTRS)

    Mel, Bartlett W.; Omohundro, Stephen M.; Robison, Arch D.; Skiena, Steven S.; Thearling, Kurt H.; Young, Luke T.; Wolfram, Stephen

    1988-01-01

    The University of Illinois design of the TABLET portable computer extends the freedom of pen and notepad with a machine that draws on the projected power of 21st century technology. Without assuming any new, major technological breakthroughs, it seeks to balance the promises of today's growing technologies with the changing role of computers in tomorrow's education, research, security, and commerce. It seeks to gather together in one basket the matured fruits of such buzzword technologies as LCD, GPS, CCD, WSI, and DSP. The design is simple, yet sleek. Roughly the size and weight of a notebook, the machine is a dark, featureless monolith with no moving parts. Through magneto-optics, a simple LaserCard provides exchangeable, mass data storage. Its I/O surface, in concert with built-in infrared and cellular transceivers, puts the user in touch with anyone and anything. The ensemble of these components, directed by software that can transform it into anything from a keyboard or notepad to an office or video studio, suggests an instrument of tremendous power and freedom.

  18. [Computational chemistry in structure-based drug design].

    PubMed

    Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu

    2013-07-01

    Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small molecular ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free-energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have played more important roles in structure-based drug design. Here we reviewed the newly developed computational chemistry methods in structure-based drug design as well as the elegant applications, including binding-site druggability assessment, large scale virtual screening of chemical database, and lead compound optimization. Importantly, here we address the current bottlenecks and propose practical solutions.

  19. Modeling a Wireless Network for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Yaprak, Ece; Lamouri, Saad

    2000-01-01

    This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.

  20. Field Instrumentation With Bricks: Wireless Networks Built From Tough, Cheap, Reliable Field Computers

    NASA Astrophysics Data System (ADS)

    Fatland, D. R.; Anandakrishnan, S.; Heavner, M.

    2004-12-01

    We describe tough, cheap, reliable field computers configured as wireless networks for distributed high-volume data acquisition and low-cost data recovery. Running under the GNU/Linux open source model these network nodes ('Bricks') are intended for either autonomous or managed deployment for many months in harsh Arctic conditions. We present here results from Generation-1 Bricks used in 2004 for glacier seismology research in Alaska and Antarctica and describe future generation Bricks in terms of core capabilities and a growing list of field applications. Subsequent generations of Bricks will feature low-power embedded architecture, large data storage capacity (GB), long range telemetry (15 km+ up from 3 km currently), and robust operational software. The list of Brick applications is growing to include Geodetic GPS, Bioacoustics (bats to whales), volcano seismicity, tracking marine fauna, ice sounding via distributed microwave receivers and more. This NASA-supported STTR project capitalizes on advancing computer/wireless technology to get scientists more data per research budget dollar, solving system integration problems and thereby getting researchers out of the hardware lab and into the field. One exemplary scenario: An investigator can install a Brick network in a remote polar environment to collect data for several months and then fly over the site to recover the data via wireless telemetry. In the past year Brick networks have moved beyond proof-of-concept to the full-bore development and testing stage; they will be a mature and powerful tool available for IPY 2007-8.

  1. The CRAF/Cassini power subsystem - Preliminary design update

    NASA Technical Reports Server (NTRS)

    Atkins, Kenneth L.; Brisendine, Philip; Clark, Karla; Klein, John; Smith, Richard

    1991-01-01

    A chronology is provided of the rationale leading from the early Mariner spacecraft to the CRAF/Cassini Mariner Mark II power subsystem architecture. The design pathway began with a hybrid including a solar photovoltaic array, a radioisotope thermoelectric generator (RTG), and a battery supplying a power profile with a peak loading of about 300 W. The initial concept was to distribute power through a new solid-state, programmable switch controlled by an embedded microprocessor. As the overall mission, science, and project design matured, the power requirements increased. The design evolved from the hybrid to two RTGs plus batteries to meet peak loadings of near 500 W in 1989. Later that year, circumstances led to abandonment of the distributed computer concept and a return to centralized control. Finally, as power requirements continued to grow, a third RTG was added to the design and the battery removed, with a return to the discharge controller for transients during fault recovery procedures.

  2. Standards in Modeling and Simulation: The Next Ten Years MODSIM World Paper 2010

    NASA Technical Reports Server (NTRS)

    Collins, Andrew J.; Diallo, Saikou; Sherfey, Solomon R.; Tolk, Andreas; Turnitsa, Charles D.; Petty, Mikel; Wiesel, Eric

    2011-01-01

    The world has moved on since the introduction of the Distributed Interactive Simulation (DIS) standard in the early 1980s. The Cold War may be over, but there is still a requirement to train for and analyze the next generation of threats that face the free world. The emergence of new and more powerful computer technology and techniques means that modeling and simulation (M&S) has become an important and growing part of satisfying this requirement. As an industry grows, the benefits from standardization within that industry grow with it. For example, it is difficult to imagine what the USA would be like without the 110 volts standard for domestic electricity supply. This paper contains an overview of the outcomes from a recent workshop to investigate the possible future of M&S standards within the federal government.

  3. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems and is being designed with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints on the initial Space Station systems, such as guidance, navigation, and control, external thermal, truss structural stiffness, and computational capabilities and storage, which must be planned in to facilitate the addition of the solar dynamic modules.

  4. Study on Electro-polymerization Nano-micro Wiring System Imitating Axonal Growth of Artificial Neurons towards Machine Learning

    NASA Astrophysics Data System (ADS)

    Dang, Nguyen Tuan; Akai-Kasada, Megumi; Asai, Tetsuya; Saito, Akira; Kuwahara, Yuji; Hokkaido University Collaboration

    2015-03-01

    Machine learning using artificial neural networks is supposed to be the best way to understand how the human brain trains itself to process information. In this study, we have successfully developed programs using a supervised machine learning algorithm. However, these supervised learning processes for the neural network required a very strong computing configuration. Driven by the need for increased computing ability and reduced power consumption, accelerator circuits become critical. To develop such accelerator circuits using a supervised machine learning algorithm, a conducting-polymer micro/nanowire growing process was realized and applied as a synaptic weight controller. In this work, high-conductivity polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT) wires were potentiostatically grown, crosslinking the designated electrodes prefabricated by lithography, when a square-wave AC voltage of appropriate amplitude and frequency was applied. The micro/nanowire growing process emulated the neurotransmitter release process of synapses inside a biological neuron, and the wire's resistance variation during the growing process was treated as the variation of synaptic weight in the machine learning algorithm. In cooperation with the Graduate School of Information Science and Technology, Hokkaido University.

  5. Future in biomolecular computation

    NASA Astrophysics Data System (ADS)

    Wimmer, E.

    1988-01-01

    Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point, if an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point, how fast massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing such as the combination of convenient graphics workstations and powerful general-purpose supercomputers will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.

  6. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  7. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories.

    PubMed

    McGibbon, Robert T; Beauchamp, Kyle A; Harrigan, Matthew P; Klein, Christoph; Swails, Jason M; Hernández, Carlos X; Schwantes, Christian R; Wang, Lee-Ping; Lane, Thomas J; Pande, Vijay S

    2015-10-20

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
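
    A minimal usage example along the lines described above is shown below; the file names are placeholders for a trajectory/topology pair you already have.

```python
# Load a trajectory with MDTraj and compute a few of the analyses mentioned
# in the abstract (RMSD, secondary structure, a common order parameter).
import mdtraj as md

traj = md.load("trajectory.xtc", top="topology.pdb")   # placeholder file names
print(traj)                                            # frames, atoms, time step

rmsd = md.rmsd(traj, traj, frame=0)                    # RMSD to the first frame
dssp = md.compute_dssp(traj)                           # per-residue secondary structure
phi_indices, phi = md.compute_phi(traj)                # backbone phi dihedrals

print("max RMSD (nm):", rmsd.max())
print("first-frame DSSP codes:", dssp[0][:10])
```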

  8. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories

    PubMed Central

    McGibbon, Robert T.; Beauchamp, Kyle A.; Harrigan, Matthew P.; Klein, Christoph; Swails, Jason M.; Hernández, Carlos X.; Schwantes, Christian R.; Wang, Lee-Ping; Lane, Thomas J.; Pande, Vijay S.

    2015-01-01

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. PMID:26488642

  9. A Percolation Model for Fracking

    NASA Astrophysics Data System (ADS)

    Norris, J. Q.; Turcotte, D. L.; Rundle, J. B.

    2014-12-01

    Developments in fracking technology have enabled the recovery of vast reserves of oil and gas; yet, there is very little publicly available scientific research on fracking. Traditional reservoir simulator models for fracking are computationally expensive, and require many hours on a supercomputer to simulate a single fracking treatment. We have developed a computationally inexpensive percolation model for fracking that can be used to understand the processes and risks associated with fracking. In our model, a fluid is injected from a single site and a network of fractures grows from the single site. The fracture network grows in bursts, the failure of a relatively strong bond followed by the failure of a series of relatively weak bonds. These bursts display similarities to microseismic events observed during a fracking treatment. The bursts follow a power-law (Gutenberg-Richter) frequency-size distribution and have growth rates similar to observed earthquake moment rates. These are quantifiable features that can be compared to observed microseismicity to help understand the relationship between observed microseismicity and the underlying fracture network.
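
    The growth rule described above is close in spirit to invasion percolation: the weakest bond on the boundary of the fluid-filled cluster fails next. The toy sketch below implements only that rule on a square lattice with random bond strengths; lattice size and the strength distribution are arbitrary choices, and the burst bookkeeping of the paper is omitted.

```python
# Toy invasion-percolation fracture network: repeatedly break the weakest
# bond adjacent to the cluster grown from a single injection site.
import heapq
import random

N = 51
random.seed(1)
strength = {}                       # bond (pair of sites) -> random strength

def bonds_from(site):
    x, y = site
    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
        if 0 <= nx < N and 0 <= ny < N:
            yield (site, (nx, ny))

def bond_strength(bond):
    key = frozenset(bond)
    if key not in strength:
        strength[key] = random.random()
    return strength[key]

injection = (N // 2, N // 2)
cluster = {injection}
frontier = [(bond_strength(b), b) for b in bonds_from(injection)]
heapq.heapify(frontier)

while len(cluster) < 500 and frontier:
    s, (a, b) = heapq.heappop(frontier)      # fail the weakest boundary bond
    if b in cluster:
        continue
    cluster.add(b)
    for nb in bonds_from(b):
        if nb[1] not in cluster:
            heapq.heappush(frontier, (bond_strength(nb), nb))

print("fractured sites:", len(cluster))
```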

  10. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
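
    The parallelizable part of k-means is the assignment step: each data row can be matched to its nearest centroid independently. The sketch below splits the rows across worker processes for that step; it is a simplified Python illustration on synthetic data, not the authors' Java/transactional-memory implementation.

```python
# Multi-process k-means: parallel assignment step, serial centroid update.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    X_chunk, centroids = args
    d = ((X_chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k=5, iters=10, workers=4):
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            labels = np.concatenate(
                list(pool.map(assign_chunk, [(c, centroids) for c in chunks])))
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(20_000, 10))
    centroids, labels = parallel_kmeans(X)
    print(np.bincount(labels))
```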

  11. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

    The growing string method is a powerful tool in the systematic study of chemical reactions with theoretical methods which allows for the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).

  12. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  13. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  14. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  15. Cots Correlator Platform

    NASA Astrophysics Data System (ADS)

    Schaaf, Kjeld; Overeem, Ruud

    2004-06-01

    Moore's law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore's law predicts. Next to the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. Typical Beowulf clusters of PCs used as supercomputers are well known. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an Infiniband host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
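
    A correlator of the kind described is conventionally organized as an FX pipeline: channelize each antenna stream with an FFT, then cross-multiply and accumulate antenna pairs. The sketch below shows that structure in NumPy on synthetic data; the real platform maps the cross-multiplication stage onto GPUs fed through InfiniBand, which is not reproduced here.

```python
# Minimal FX-correlator sketch: F stage (per-antenna FFT channelization),
# then X stage (cross-multiply and accumulate per baseline).  Synthetic data.
import numpy as np
from itertools import combinations_with_replacement

n_ant, n_chan, n_spectra = 4, 256, 1000
rng = np.random.default_rng(0)
x = rng.normal(size=(n_ant, n_chan * n_spectra)) \
    + 1j * rng.normal(size=(n_ant, n_chan * n_spectra))   # complex baseband samples

# F stage: split each stream into blocks and FFT -> (ant, spectrum, channel)
spectra = np.fft.fft(x.reshape(n_ant, n_spectra, n_chan), axis=2)

# X stage: accumulate visibilities for every antenna pair (including autos)
visibilities = {}
for i, j in combinations_with_replacement(range(n_ant), 2):
    visibilities[(i, j)] = (spectra[i] * np.conj(spectra[j])).mean(axis=0)

print("baselines:", len(visibilities), "channels each:", n_chan)
```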

  16. Bragg gratings inscription in step-index PMMA optical fiber by femtosecond laser pulses at 400 nm

    NASA Astrophysics Data System (ADS)

    Hu, X.; Kinet, D.; Chah, K.; Mégret, P.; Caucheteur, C.

    2016-05-01

    In this paper, we report photo-inscription of uniform Bragg gratings in trans-4-stilbenemethanol-doped photosensitive step-index polymer optical fiber. Gratings were produced at ~1575 nm by the phase mask technique with a femtosecond laser emitting at 400 nm with different average optical powers (8 mW, 13 mW and 20 mW). The grating growth dynamics in transmission were monitored during the manufacturing process, showing that the grating grows faster with higher power. Using 20 mW laser beam power, the reflectivity reaches 94% (8 dB transmission loss) in 70 seconds. Finally, the gratings were characterized in temperature in the range 20-45 °C. The thermal sensitivity was computed to be -86.6 pm/°C.
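
    The quoted thermal sensitivity is simply the slope of Bragg wavelength against temperature over the 20-45 °C range. The sketch below shows that linear fit on made-up data points; only the -86.6 pm/°C slope is taken from the paper.

```python
# Extract the thermal sensitivity as the slope of a linear fit of Bragg
# wavelength vs. temperature.  The readings below are synthetic.
import numpy as np

temperature = np.array([20, 25, 30, 35, 40, 45], dtype=float)        # °C
bragg_nm = 1575.0 - 86.6e-3 * (temperature - 20) \
    + np.random.default_rng(0).normal(0, 5e-3, temperature.size)     # synthetic readings

slope_nm_per_C, intercept = np.polyfit(temperature, bragg_nm, 1)
print(f"thermal sensitivity ≈ {slope_nm_per_C*1e3:.1f} pm/°C")        # ≈ -86.6
```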

  17. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    PubMed

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  18. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
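
    The comparison in the abstract is between a direct factorization and an iterative solve accelerated by a preconditioner built from the system's structure. The sketch below contrasts LU with preconditioned conjugate gradients on a small symmetric, diagonally dominant system; a simple Jacobi preconditioner stands in for the low-stretch spanning-tree chain preconditioner, which is far more involved.

```python
# Direct (LU) vs. iterative (preconditioned CG) solve of a sparse SPD,
# diagonally dominant system, as a stand-in for the power-flow linear solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1, 2.01, -1], [-1, 0, 1], shape=(n, n), format="csc")  # SPD, dominant
b = np.ones(n)

x_direct = spla.splu(A).solve(b)                    # LU factorization baseline

M = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())    # Jacobi
x_cg, info = spla.cg(A, b, M=M)

print("CG converged:", info == 0,
      " max difference vs LU:", np.abs(x_cg - x_direct).max())
```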

  19. Towards energy-efficient photonic interconnects

    NASA Astrophysics Data System (ADS)

    Demir, Yigit; Hardavellas, Nikos

    2015-03-01

    Silicon photonics have emerged as a promising solution to meet the growing demand for high-bandwidth, low-latency, and energy-efficient on-chip and off-chip communication in many-core processors. However, current silicon-photonic interconnect designs for many-core processors waste a significant amount of power because (a) lasers are always on, even during periods of interconnect inactivity, and (b) microring resonators employ heaters which consume a significant amount of power just to overcome thermal variations and maintain communication on the photonic links, especially in a 3D-stacked design. The problem of high laser power consumption is particularly important as lasers typically have very low energy efficiency, and photonic interconnects often remain underutilized both in scientific computing (compute-intensive execution phases underutilize the interconnect), and in server computing (servers in Google-scale datacenters have a typical utilization of less than 30%). We address the high laser power consumption by proposing EcoLaser+, which is a laser control scheme that saves energy by predicting the interconnect activity and opportunistically turning the on-chip laser off when possible, and also by scaling the width of the communication link based on a runtime prediction of the expected message length. Our laser control scheme can save up to 62 - 92% of the laser energy, and improve the energy efficiency of a manycore processor with negligible performance penalty. We address the high trimming (heating) power consumption of the microrings by proposing insulation methods that reduce the impact of localized heating induced by highly-active components on the 3D-stacked logic die.

  20. A distributed big data storage and data mining framework for solar-generated electricity quantity forecasting

    NASA Astrophysics Data System (ADS)

    Wang, Jianzong; Chen, Yanjun; Hua, Rui; Wang, Peng; Fu, Jia

    2012-02-01

    Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of solar cells containing a photovoltaic material. Due to the growing demand for renewable energy sources, the manufacturing of solar cells and photovoltaic arrays has advanced considerably in recent years. Solar photovoltaics are growing rapidly, albeit from a small base, to a total global capacity of 40,000 MW at the end of 2010. More than 100 countries use solar photovoltaics. Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics has declined steadily since the first solar cells were manufactured. Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have supported solar photovoltaic installations in many countries. However, the power generated by solar photovoltaics is affected dramatically by the weather and other natural factors. Predicting photovoltaic energy accurately is important for intelligent dispatch of the entire power system, in order to reduce energy dissipation and maintain the security of the power grid. In this paper, we have proposed a big data system, the Solar Photovoltaic Power Forecasting System (SPPFS), to calculate and predict the power according to real-time conditions. In this system, we utilized a distributed mixed database to speed up the collection, storage and analysis of the meteorological data. In order to improve the accuracy of power prediction, a neural network algorithm has been incorporated into SPPFS. Extensive experiments show that the framework can provide higher forecast accuracy (error rate less than 15%) and obtain low computing latency by deploying the mixed distributed database architecture for solar-generated electricity.
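
    The forecasting step amounts to learning a mapping from weather conditions to PV output. The sketch below trains a small neural network on synthetic weather features; the feature set, data, and network size are illustrative assumptions, not the SPPFS configuration.

```python
# Toy PV-output forecast: train an MLP on synthetic weather features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
irradiance = rng.uniform(0, 1000, n)          # W/m^2
temperature = rng.uniform(-5, 40, n)          # °C
cloud_cover = rng.uniform(0, 1, n)            # fraction

# synthetic "measured" PV power with mild temperature derating and noise
power_kw = 0.2 * irradiance * (1 - 0.5 * cloud_cover) \
    * (1 - 0.004 * (temperature - 25)) + rng.normal(0, 5, n)

X = np.column_stack([irradiance, temperature, cloud_cover])
X_tr, X_te, y_tr, y_te = train_test_split(X, power_kw, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```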

  1. Clinostat Delivers Power To Plant-Growth Cabinets

    NASA Technical Reports Server (NTRS)

    Bushong, Wilton E.; Fox, Ronald C.; Brown, Christopher S.; Biro, Ronald R.; Dreshel, Thomas W.

    1993-01-01

    Clinostat rotates coaxial pair of plant-growth cabinets about horizontal axis while supplying cabinets with electric power for built-in computers, lamps, fans, and auxiliary equipment, such as nutrient pumps. Each cabinet is a self-contained unit for growing plants in controlled environment. By rotating cabinets and contents about horizontal axis, scientists simulate and study some of the effects of microgravity on growth of plants. Clinostat includes vertical aluminum mounting bracket on horizontal aluminum base. Bearings on bracket hold shaft with V-belt pulley. At each end of shaft, circular plate holds frame mount for cabinet. Mounting plates also used to hold transparent sealed growth chambers described in article, "Sealed Plant-Growth Chamber For Clinostat" (KSC-11538).

  2. Placing US Air Force Information Technology Investment Under the Nanoscope A Clear Vision of Nanotechnology’s Impact on Computing in 2030

    DTIC Science & Technology

    2007-04-01

    effectively. Another serious problem is the growing power consumption for high-performance logic chips. If increasing clock frequency and IC density... (n) Study Effect of Nanomaterials on Environment. What is your judgment of the potential of the various responses based on your knowledge and the... open research closely coupled to internal development and deployment. (n) Study Effect of Nanomaterials on Environment (o) Long-Term, Balanced IT

  3. Experimental Investigation and Computer Modeling of Optical Switching in Distributed Bragg Reflector and Vertical Cavity Surface Emitting Laser Structures.

    DTIC Science & Technology

    1995-12-01

    of a Molecular Beam Epitaxy (MBE) system prior to growing a Vertical Cavity Surface Emitting Laser (VCSEL). VCSEL bistability is discussed later in... addition, optical bistability in the reflectivity of a DBR, as well as in the lasing power, wavelength, and beam divergence of a lasing VCSEL are... [Figure 2: VCSEL structure and spectral reflectivity of the AlGaAs/AlAs top DBR mirror; labels include cavity, bottom DBR mirror, substrate, output beam, resonance, pump, minimum, and stop band.]

  4. Materials discovery guided by data-driven insights

    NASA Astrophysics Data System (ADS)

    Klintenberg, Mattias

    As computational power continues to grow, systematic computational exploration has become an important tool for materials discovery. In this presentation the Electronic Structure Project (ESP/ELSA) will be discussed and a number of examples presented that show some of the capabilities of a data-driven methodology for guiding materials discovery. These examples include topological insulators, detector materials and 2D materials. ESP/ELSA is an initiative that dates back to 2001 and today contains many tens of thousands of materials that have been investigated using a robust and high-accuracy electronic structure method (all-electron FP-LMTO), thus providing basic first-principles materials data for most inorganic compounds that have been structurally characterized. The web site containing the ESP/ELSA data has as of today been accessed from more than 4,000 unique computers from all around the world.

  5. A Systematic Review of Tablet Computers and Portable Media Players as Speech Generating Devices for Individuals with Autism Spectrum Disorder.

    PubMed

    Lorah, Elizabeth R; Parnell, Ashley; Whitby, Peggy Schaefer; Hantula, Donald

    2015-12-01

    Powerful, portable, off-the-shelf handheld devices, such as tablet based computers (i.e., iPad(®); Galaxy(®)) or portable multimedia players (i.e., iPod(®)), can be adapted to function as speech generating devices for individuals with autism spectrum disorders or related developmental disabilities. This paper reviews the research in this new and rapidly growing area and delineates an agenda for future investigations. In general, participants using these devices acquired verbal repertoires quickly. Studies comparing these devices to picture exchange or manual sign language found that acquisition was often quicker when using a tablet computer and that the vast majority of participants preferred using the device to picture exchange or manual sign language. Future research in interface design, user experience, and extended verbal repertoires is recommended.

  6. Eruptive event generator based on the Gibson-Low magnetic configuration

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.

    2017-08-01

    Coronal mass ejections (CMEs), a kind of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that would facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.

  7. Real-Time Spatio-Temporal Twice Whitening for MIMO Energy Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Mitra, Pramita; Barhen, Jacob

    2010-01-01

    While many techniques exist for local spectrum sensing of a primary user, each represents a computationally demanding task to secondary user receivers. In software-defined radio, computational complexity lengthens the time for a cognitive radio to recognize changes in the transmission environment. This complexity is even more significant for spatially multiplexed receivers, e.g., in SIMO and MIMO, where the spatio-temporal data sets grow in size with the number of antennae. Limits on power and space for the processor hardware further constrain SDR performance. In this report, we discuss improvements in spatio-temporal twice whitening (STTW) for real-time local spectrum sensing by demonstrating a form of STTW well suited for MIMO environments. We implement STTW on the Coherent Logix hx3100 processor, a multicore processor intended for low-power, high-throughput software-defined signal processing. These results demonstrate how coupling the novel capabilities of emerging multicore processors with algorithmic advances can enable real-time, software-defined processing of large spatio-temporal data sets.
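
    The whitening idea underlying STTW is to estimate the noise covariance across channels and apply its inverse Cholesky factor so the channels become uncorrelated with unit variance. The sketch below shows spatial whitening only, on synthetic antenna data; the paper's joint spatio-temporal (space-by-lag) version and the hx3100 mapping are not reproduced.

```python
# Spatial whitening of correlated multi-antenna noise via the Cholesky factor
# of the sample covariance.  Synthetic data; illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_samp = 4, 100_000

mix = rng.normal(size=(n_ant, n_ant))          # unknown coupling between channels
x = mix @ (rng.normal(size=(n_ant, n_samp)) + 1j * rng.normal(size=(n_ant, n_samp)))

R = x @ x.conj().T / n_samp                    # sample spatial covariance
L = np.linalg.cholesky(R)                      # R = L L^H
x_white = np.linalg.solve(L, x)                # whitened data

R_white = x_white @ x_white.conj().T / n_samp
print(np.round(np.abs(R_white), 2))            # ≈ identity
```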

  8. Dynamic VMs placement for energy efficiency by PSO in cloud computing

    NASA Astrophysics Data System (ADS)

    Dashti, Seyed Ebrahim; Rahmani, Amir Masoud

    2016-03-01

    Recently, cloud computing has been growing fast and helps to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, so that profits are decreased. To guarantee quality of service for users' tasks and reduce energy consumption, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines on overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, as the simulation conditions approach the real environment, our method is able to save as much as 14% more energy, while the number of migrations and the simulation time are significantly reduced compared with previous works.
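
    In broad terms, PSO for VM placement searches over candidate VM-to-host assignments and scores them with a power model. The toy sketch below encodes one host index per VM in each particle and minimises a simple idle-plus-utilisation power model with an overload penalty; the power model, problem sizes, and PSO constants are illustrative assumptions, not the paper's formulation.

```python
# Toy PSO for VM placement: each particle holds a candidate host index per VM.
import numpy as np

rng = np.random.default_rng(0)
n_vms, n_hosts = 30, 10
vm_cpu = rng.uniform(0.05, 0.3, n_vms)       # fraction of one host's capacity

def power(assign):
    load = np.bincount(assign, weights=vm_cpu, minlength=n_hosts)
    active = load > 0
    p = np.sum(active * 100 + 150 * np.minimum(load, 1.0))      # idle + dynamic W
    return p + 1000 * np.sum(np.maximum(load - 1.0, 0))          # overload penalty

n_particles, iters = 40, 200
pos = rng.uniform(0, n_hosts, (n_particles, n_vms))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([power(p.astype(int)) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_hosts - 1e-9)
    f = np.array([power(p.astype(int)) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best total power (W):", round(power(gbest.astype(int)), 1))
```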

  9. AstroML: Python-powered Machine Learning for Astronomy

    NASA Astrophysics Data System (ADS)

    Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.

    2014-01-01

    As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly-available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within Astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.

  10. Nanoscale RRAM-based synaptic electronics: toward a neuromorphic computing device.

    PubMed

    Park, Sangsu; Noh, Jinwoo; Choo, Myung-Lae; Sheri, Ahmad Muqeem; Chang, Man; Kim, Young-Bae; Kim, Chang Jung; Jeon, Moongu; Lee, Byung-Geun; Lee, Byoung Hun; Hwang, Hyunsang

    2013-09-27

    Efforts to develop scalable learning algorithms for implementation of networks of spiking neurons in silicon have been hindered by the considerable footprints of learning circuits, which grow as the number of synapses increases. Recent developments in nanotechnologies provide an extremely compact device with low power consumption. In particular, nanoscale resistive switching devices (resistive random-access memory (RRAM)) are regarded as a promising solution for implementation of biological synapses due to their nanoscale dimensions, capacity to store multiple bits and the low energy required to operate distinct states. In this paper, we report the fabrication, modeling and implementation of nanoscale RRAM with multi-level storage capability for an electronic synapse device. In addition, we first experimentally demonstrate the learning capabilities and predictable performance by a neuromorphic circuit composed of a nanoscale 1 kbit RRAM cross-point array of synapses and complementary metal-oxide-semiconductor neuron circuits. These developments open up possibilities for the development of ubiquitous ultra-dense, ultra-low-power cognitive computers.

  11. On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic

    Time-domain simulations are heavily used in today’s planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies to comply with industry standards. Because of the increased modeling complexity, it is several times slower than real time for state-of-the-art commercial packages to complete a dynamic simulation for a large-scale model. With the growing stochastic behavior introduced by emerging technologies, power industry has seen a growing need for performing security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored such as parallelizing the calculation of generator current injection, identifying fast linear solvers for network solution, and parallelizing data outputs when interacting with APIs in the commercial package, TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.
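
    Generator current injections are independent of one another at each integration step, which is why they are a natural parallelization target. The sketch below distributes a simple classical-machine Norton injection across worker processes; the machine model and all numbers are illustrative, and the paper performs this through the TSAT APIs rather than plain Python.

```python
# Parallel computation of per-generator Norton current injections.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def injection(args):
    """Norton current of one classical machine: I = E'∠δ / (j X'd)."""
    e_mag, delta, xdp = args
    return (e_mag * np.exp(1j * delta)) / (1j * xdp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_gen = 2000
    gens = [(rng.uniform(1.0, 1.1), rng.uniform(-0.5, 0.5), rng.uniform(0.2, 0.4))
            for _ in range(n_gen)]

    with ProcessPoolExecutor(max_workers=4) as pool:
        currents = list(pool.map(injection, gens, chunksize=250))

    print("computed", len(currents), "injections; first:", currents[0])
```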

  12. Bigdata Driven Cloud Security: A Survey

    NASA Astrophysics Data System (ADS)

    Raja, K.; Hanifa, Sabibullah Mohamed

    2017-08-01

    Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front-end, which includes the users' computers and the software required to access the cloud network, and a back-end, which consists of the various computers, servers and database systems that create the cloud. SaaS (Software-as-a-Service, where end users utilize outsourced software), PaaS (Platform-as-a-Service, where a platform is provided), IaaS (Infrastructure-as-a-Service, where the physical environment is outsourced), and DaaS (Database-as-a-Service, where data can be housed within a cloud) are the service models through which the leading/traditional cloud ecosystem delivers cloud services, and together they have become a powerful and popular architecture. Many challenges and issues remain in security and threats, the most vital barrier for the cloud computing environment. The main barrier to the adoption of CC in health care relates to data security. When placing and transmitting data using public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the adoption of the service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC (which can be created/deployed easily), since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.

  13. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally-expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may be not viable for certain cases. A solution that is being evaluated to assist the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
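
    The surrogate-model workflow is: run the expensive code at a handful of sample points, fit a cheap regressor to those runs, then query the regressor everywhere else. The sketch below illustrates this with a Gaussian process on a one-dimensional synthetic "simulator" that stands in for a RISMC thermal-hydraulic response; it is not any specific RISMC surrogate.

```python
# Fit a Gaussian-process surrogate to a few expensive simulation runs and
# query it instead of the code.  Synthetic 1-D "simulator" for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    # stand-in for hours of simulation per point
    return np.sin(3 * x) + 0.3 * x

X_train = np.linspace(0, 3, 12).reshape(-1, 1)        # a few costly runs
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

X_query = np.linspace(0, 3, 200).reshape(-1, 1)
y_mean, y_std = gp.predict(X_query, return_std=True)  # microseconds per query
print("max surrogate error:",
      np.abs(y_mean - expensive_simulation(X_query).ravel()).max())
```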

  14. Global simulation of the Czochralski silicon crystal growth in ANSYS FLUENT

    NASA Astrophysics Data System (ADS)

    Kirpo, Maksims

    2013-05-01

    Silicon crystals for high-efficiency solar cells are produced mainly by the Czochralski (CZ) crystal growth method. Computer simulations of the CZ process have established themselves as a basic tool for optimization of the growth process, which allows production costs to be reduced while keeping the high quality of the crystalline material. The author shows the application of the general Computational Fluid Dynamics (CFD) code ANSYS FLUENT to the solution of a static two-dimensional (2D) axisymmetric global model of a small industrial furnace for growing silicon crystals with a diameter of 100 mm. The presented numerical model is self-sufficient and incorporates the most important physical phenomena of the CZ growth process, including latent heat generation during crystallization, crystal-melt interface deflection, turbulent heat and mass transport, oxygen transport, etc. The demonstrated approach allows the heater power to be found for the specified pulling rate of the crystal, but the obtained power values are smaller than those found in the literature for the studied furnace. However, the described approach is successfully verified with respect to the heater power by its application to numerical simulations of real CZ pullers by "Bosch Solar Energy AG".

  15. TLBO based Voltage Stable Environment Friendly Economic Dispatch Considering Real and Reactive Power Constraints

    NASA Astrophysics Data System (ADS)

    Verma, H. K.; Mafidar, P.

    2013-09-01

    In view of the growing concern for the environment, power system engineers are forced to generate quality green energy. Hence, economic dispatch (ED) aims at scheduling power generation to meet the load demand at minimum fuel cost, subject to environmental and voltage constraints along with the essential constraints on real and reactive power. Emission control, which reduces the negative impact on the environment, is achieved by including additional constraints in the ED problem. Presently, the power system mostly operates near its stability limits; therefore, with increased demand the system faces voltage problems. In the present work, the bus voltages are brought within limits by placing a static var compensator (SVC) at the weak bus, which is identified from the bus participation factor. The optimal size of the SVC is determined by a univariate search method. This paper presents the use of the Teaching Learning based Optimization (TLBO) algorithm for the voltage-stable, environment-friendly ED problem with real and reactive power constraints. The computational effectiveness of TLBO is established through test results against particle swarm optimization (PSO) and Big Bang-Big Crunch (BB-BC) algorithms for the ED problem.
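
    The following is a minimal, hedged sketch of the teacher and learner phases of TLBO applied to a toy quadratic fuel-cost dispatch, with a single power-balance constraint handled by a penalty term. The cost coefficients, demand, and generator limits are invented for illustration and are not the paper's test system, which also includes emission, voltage, and reactive-power constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.004, 0.006, 0.009])   # quadratic fuel-cost coefficients (illustrative)
b = np.array([5.3, 5.5, 5.8])
c = np.array([500.0, 400.0, 200.0])
demand, penalty = 800.0, 1e3          # MW load and power-balance penalty weight
lo, hi = 100.0, 500.0                 # generator limits, MW (illustrative)

def cost(P):
    # Quadratic fuel cost plus a penalty on the power-balance mismatch.
    return np.sum(a * P**2 + b * P + c) + penalty * (np.sum(P) - demand) ** 2

pop = rng.uniform(lo, hi, size=(20, 3))
for _ in range(200):
    fitness = np.array([cost(p) for p in pop])
    teacher = pop[np.argmin(fitness)]
    # Teacher phase: move the class toward the best solution found so far.
    TF = rng.integers(1, 3)           # teaching factor, 1 or 2
    cand = np.clip(pop + rng.random(pop.shape) * (teacher - TF * pop.mean(axis=0)), lo, hi)
    improved = np.array([cost(p) for p in cand]) < fitness
    pop[improved] = cand[improved]
    # Learner phase: each learner interacts with a random classmate.
    fitness = np.array([cost(p) for p in pop])
    for i in range(len(pop)):
        j = rng.integers(len(pop))
        if j == i:
            continue
        step = (pop[j] - pop[i]) if fitness[j] < fitness[i] else (pop[i] - pop[j])
        cand_i = np.clip(pop[i] + rng.random(3) * step, lo, hi)
        if cost(cand_i) < fitness[i]:
            pop[i] = cand_i

best = pop[np.argmin([cost(p) for p in pop])]
print("dispatch (MW):", best, "cost:", cost(best))
```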

  16. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, together with a mechanism for building systems on this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  17. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  18. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data (typically on a connected High Performance Compute Cluster), allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.

  19. Addressing the challenges of standalone multi-core simulations in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Ocaya, R. O.; Terblans, J. J.

    2017-07-01

    Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, large amounts of data are generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer based on Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines, with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written with respect to the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
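
    As a minimal illustration of the MPI pattern named above (communicator size, rank, and reduction), the sketch below uses the mpi4py bindings, whose calls mirror MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce(). Each rank sums the pairwise energies of its own slice of atoms and the partial sums are reduced on rank 0. The pair potential is a placeholder, not the Sutton-Chen potential of the VSV code.

```python
# Run with, e.g.: mpiexec -n 4 python pair_energy.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()   # analogous to MPI_Comm_size()
rank = comm.Get_rank()   # analogous to MPI_Comm_rank()

n_atoms = 4000
rng = np.random.default_rng(42)        # same seed so every rank sees identical coordinates
pos = rng.random((n_atoms, 3)) * 20.0

def pair_energy(r):
    # Placeholder pair potential; a real MD code would use an embedded-atom form here.
    return 1.0 / r**12 - 1.0 / r**6

local = 0.0
for i in range(rank, n_atoms, size):   # round-robin decomposition of atom indices
    d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
    local += np.sum(pair_energy(d[d < 2.5]))

total = comm.reduce(local, op=MPI.SUM, root=0)   # analogous to MPI_Reduce()
if rank == 0:
    print("total pair energy:", total)
```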

  20. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.
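
    For reference, a brief sketch of the computing primitive the abstract says spin-torque switches can emulate: an artificial neuron that forms a weighted (analog) sum of its inputs and applies a hard threshold. This is only the abstract circuit model, not a device-level simulation of a spin-neuron; the weights and threshold are illustrative.

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold=0.0):
    # Analog summing followed by a hard "thresholding" step.
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Example: a 2-input neuron wired to behave as a logical AND.
weights = np.array([0.6, 0.6])
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, threshold_neuron(np.array(x), weights, threshold=1.0))
```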

  1. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  2. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamics (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely employed by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  3. Synthetic battery cycling techniques

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; Thaller, L. H.

    1982-01-01

    Synthetic battery cycling makes use of the fast-growing capability of computer graphics to illustrate some of the basic characteristics of the operation of individual electrodes within an operating electrochemical cell. It can also simulate the operation of an entire string of cells used as the energy storage subsystem of a power system. The group of techniques referred to collectively as Synthetic Battery Cycling was developed in part to bridge the gap in understanding between single-cell characteristics and battery-system behavior.

  4. Structural and Computational Biology in the Design of Immunogenic Vaccine Antigens

    PubMed Central

    Liljeroos, Lassi; Malito, Enrico; Ferlenghi, Ilaria; Bottomley, Matthew James

    2015-01-01

    Vaccination is historically one of the most important medical interventions for the prevention of infectious disease. Previously, vaccines were typically made of rather crude mixtures of inactivated or attenuated causative agents. However, over the last 10–20 years, several important technological and computational advances have enabled major progress in the discovery and design of potently immunogenic recombinant protein vaccine antigens. Here we discuss three key breakthrough approaches that have potentiated structural and computational vaccine design. Firstly, genomic sciences gave birth to the field of reverse vaccinology, which has enabled the rapid computational identification of potential vaccine antigens. Secondly, major advances in structural biology, experimental epitope mapping, and computational epitope prediction have yielded molecular insights into the immunogenic determinants defining protective antigens, enabling their rational optimization. Thirdly, and most recently, computational approaches have been used to convert this wealth of structural and immunological information into the design of improved vaccine antigens. This review aims to illustrate the growing power of combining sequencing, structural and computational approaches, and we discuss how this may drive the design of novel immunogens suitable for future vaccines urgently needed to increase the global prevention of infectious disease. PMID:26526043

  5. Structural and Computational Biology in the Design of Immunogenic Vaccine Antigens.

    PubMed

    Liljeroos, Lassi; Malito, Enrico; Ferlenghi, Ilaria; Bottomley, Matthew James

    2015-01-01

    Vaccination is historically one of the most important medical interventions for the prevention of infectious disease. Previously, vaccines were typically made of rather crude mixtures of inactivated or attenuated causative agents. However, over the last 10-20 years, several important technological and computational advances have enabled major progress in the discovery and design of potently immunogenic recombinant protein vaccine antigens. Here we discuss three key breakthrough approaches that have potentiated structural and computational vaccine design. Firstly, genomic sciences gave birth to the field of reverse vaccinology, which has enabled the rapid computational identification of potential vaccine antigens. Secondly, major advances in structural biology, experimental epitope mapping, and computational epitope prediction have yielded molecular insights into the immunogenic determinants defining protective antigens, enabling their rational optimization. Thirdly, and most recently, computational approaches have been used to convert this wealth of structural and immunological information into the design of improved vaccine antigens. This review aims to illustrate the growing power of combining sequencing, structural and computational approaches, and we discuss how this may drive the design of novel immunogens suitable for future vaccines urgently needed to increase the global prevention of infectious disease.

  6. Multi-objective reverse logistics model for integrated computer waste management.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
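
    To make the cost-risk trade-off described above concrete, the small sketch below enumerates discrete allocations of a waste stream between two hypothetical facilities and keeps the non-dominated (Pareto-efficient) cost/risk pairs. The facility coefficients are invented for illustration; the paper's actual model is an integer linear program with Monte Carlo handling of uncertain waste quantities.

```python
# Hypothetical per-tonne cost and environmental-risk scores for two facilities.
facilities = {"recycler": (40.0, 1.0), "landfill": (10.0, 6.0)}
total_waste = 100  # tonnes to allocate

solutions = []
for to_recycler in range(0, total_waste + 1, 10):
    to_landfill = total_waste - to_recycler
    cost = to_recycler * facilities["recycler"][0] + to_landfill * facilities["landfill"][0]
    risk = to_recycler * facilities["recycler"][1] + to_landfill * facilities["landfill"][1]
    solutions.append((to_recycler, cost, risk))

# Keep only allocations not dominated in both cost and risk by another allocation.
pareto = [s for s in solutions
          if not any(o[1] <= s[1] and o[2] <= s[2] and o != s for o in solutions)]
for alloc, cost, risk in pareto:
    print(f"{alloc} t recycled: cost={cost:.0f}, risk={risk:.0f}")
```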

  7. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  8. Grow--a computer subroutine that projects the growth of trees in the Lake States' forests.

    Treesearch

    Gary J. Brand

    1981-01-01

    A computer subroutine, Grow, has been written in 1977 Standard FORTRAN to implement a distance-independent, individual tree growth model for Lake States' forests. Grow is a small and easy-to-use version of the growth model. All the user has to do is write a calling program to read initial conditions, call Grow, and summarize the results.

  9. Differences in codon bias cannot explain differences in translational power among microbes.

    PubMed

    Dethlefsen, Les; Schmidt, Thomas M

    2005-01-06

    Translational power is the cellular rate of protein synthesis normalized to the biomass invested in translational machinery. Published data suggest a previously unrecognized pattern: translational power is higher among rapidly growing microbes, and lower among slowly growing microbes. One factor known to affect translational power is biased use of synonymous codons. The correlation within an organism between expression level and degree of codon bias among genes of Escherichia coli and other bacteria capable of rapid growth is commonly attributed to selection for high translational power. Conversely, the absence of such a correlation in some slowly growing microbes has been interpreted as the absence of selection for translational power. Because codon bias caused by translational selection varies between rapidly growing and slowly growing microbes, we investigated whether observed differences in translational power among microbes could be explained entirely by differences in the degree of codon bias. Although the data are not available to estimate the effect of codon bias in other species, we developed an empirically-based mathematical model to compare the translation rate of E. coli to the translation rate of a hypothetical strain which differs from E. coli only by lacking codon bias. Our reanalysis of data from the scientific literature suggests that translational power can differ by a factor of 5 or more between E. coli and slowly growing microbial species. Using empirical codon-specific in vivo translation rates for 29 codons, and several scenarios for extrapolating from these data to estimates over all codons, we find that codon bias cannot account for more than a doubling of the translation rate in E. coli, even with unrealistic simplifying assumptions that exaggerate the effect of codon bias. With more realistic assumptions, our best estimate is that codon bias accelerates translation in E. coli by no more than 60% in comparison to microbes with very little codon bias. While codon bias confers a substantial benefit of faster translation and hence greater translational power, the magnitude of this effect is insufficient to explain observed differences in translational power among bacterial and archaeal species, particularly the differences between slowly growing and rapidly growing species. Hence, large differences in translational power suggest that the translational apparatus itself differs among microbes in ways that influence translational performance.
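
    A small worked example of the kind of comparison described above, using invented codon-specific elongation rates: the average elongation rate is the usage-weighted harmonic-mean-style inverse of per-codon times (time per codon is what averages), so even strong bias toward fast codons raises the overall rate only modestly relative to uniform usage. The rates and usage vectors below are illustrative, not the empirical values used in the paper.

```python
import numpy as np

# Invented per-codon elongation rates (codons per second) for a toy 4-codon family.
rates = np.array([20.0, 15.0, 8.0, 5.0])

def translation_rate(usage):
    # Average time per codon is the usage-weighted mean of 1/rate; invert to get rate.
    return 1.0 / np.sum(usage / rates)

uniform = np.array([0.25, 0.25, 0.25, 0.25])   # no codon bias
biased = np.array([0.55, 0.30, 0.10, 0.05])    # strong bias toward fast codons

r_uniform, r_biased = translation_rate(uniform), translation_rate(biased)
print(f"unbiased: {r_uniform:.1f} codons/s, biased: {r_biased:.1f} codons/s, "
      f"speed-up: {r_biased / r_uniform:.2f}x")
```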

  10. Market survey of fuel cells in Mexico: Niche for low power portable systems

    NASA Astrophysics Data System (ADS)

    Ramírez-Salgado, Joel; Domínguez-Aguilar, Marco A.

    This work provides an overview of the potential market in Mexico for portable electronic devices that could be powered by direct methanol fuel cells. An extrapolation method based on data published in Mexico and abroad served to complete this market survey. A review of electronics consumption set the basis for the future forecast and technology assimilation. The potential market for fuel cells for mobile phones in Mexico will be around 5.5 billion USD by 2013, considering a cost of 41 USD per cell in a market of 135 million mobile phones. Likewise, the market for notebook computers, PDAs and other electronic devices will likely grow in the future, with a combined consumption of fuel cell technology equivalent to 1.6 billion USD by 2014.

  11. Sustainable mobile information infrastructures in low resource settings.

    PubMed

    Braa, Kristin; Purkayastha, Saptarshi

    2010-01-01

    Developing countries represent the fastest-growing mobile markets in the world. For people with no computing access, a mobile phone will be their first computing device. Mobile technologies offer a significant potential to strengthen health systems in developing countries with respect to community-based monitoring, reporting, feedback to service providers, and strengthening communication and coordination between different health functionaries, medical officers and the community. However, there are various challenges in realizing this potential, including technological issues such as lack of power, as well as social, institutional and usage issues. In this paper a case study from India on mobile health implementation and use is reported. An underlying principle guiding this paper is to see mobile technology not as a "stand-alone device" but as a potentially integral component of an integrated mobile-supported health information infrastructure.

  12. Near East/South Asia Report

    DTIC Science & Technology

    1985-06-13

    vegetables, we find that there is the pressure of purchasing power on the part of the Egyptian citizen and this pressure has extended to all goods in a...happened- is that the supply is available and has been increasing year after year. But the purchase power has also been growing more rapidly. For...example, if production grows by 5 percent a year, we find that the purchasing power and the demand are growing by 10 percent a year. This is the

  13. Turbomachinery

    NASA Technical Reports Server (NTRS)

    Simoneau, Robert J.; Strazisar, Anthony J.; Sockol, Peter M.; Reid, Lonnie; Adamczyk, John J.

    1987-01-01

    The discipline research in turbomachinery, which is directed toward building the tools needed to understand such a complex flow phenomenon, is based on the fact that flow in turbomachinery is fundamentally unsteady or time dependent. Success in building a reliable inventory of analytic and experimental tools will depend on how time and time-averages are treated, as well as on how space and space-averages are treated. The raw tools at our disposal (both experimental and computational) are truly powerful and their numbers are growing at a staggering pace. As a result of this power, a case can be made that a situation exists where information is outstripping understanding. The challenge is to develop a set of computational and experimental tools which genuinely increase understanding of the fluid flow and heat transfer in a turbomachine. Viewgraphs outline a philosophy based on working through a stairstep hierarchy of mathematical and experimental complexity to build a system of tools which enables one to aggressively design the turbomachinery of the next century. Examples of the types of computational and experimental tools under current development at Lewis, with progress to date, are examined. The examples include work in both the time-resolved and time-averaged domains. Finally, an attempt is made to identify the proper place for Lewis in this continuum of research.

  14. Exact Synthesis of Reversible Circuits Using A* Algorithm

    NASA Astrophysics Data System (ADS)

    Datta, K.; Rathi, G. K.; Sengupta, I.; Rahaman, H.

    2015-06-01

    With the growing emphasis on low-power design methodologies, and the result that theoretically zero power dissipation is possible only if computations are information-lossless, the design and synthesis of reversible logic circuits have become very important in recent years. Reversible logic circuits are also important in the context of quantum computing, where the basic operations are reversible in nature. Several synthesis methodologies for reversible circuits have been reported. Some of these methods are termed exact, where the motivation is to obtain the minimum-gate realization for a given reversible function. These methods are computationally very intensive and are able to synthesize only very small functions. There are other methods, based on function transformations or higher-level representations of functions such as binary decision diagrams or exclusive-or sum-of-products, that are able to handle much larger circuits without any guarantee of optimality or near-optimality. The design of exact synthesis algorithms is interesting in this context because they set benchmarks against which other methods can be compared. This paper proposes an exact synthesis approach based on an iterative-deepening version of the A* algorithm using the multiple-control Toffoli gate library. Experimental results are presented with comparisons with other exact and some heuristic-based synthesis approaches.
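
    A simplified sketch of the idea behind such a search, under assumptions of my own: a 3-line reversible specification is represented as a permutation of 0..7, the full multiple-control Toffoli (MCT) gate set is enumerated, and iterative-deepening A* with a deliberately weak (but admissible) heuristic looks for a shortest gate sequence that maps the specification to the identity. This is not the paper's formulation or heuristic, only an illustration of IDA* over the MCT library.

```python
from itertools import combinations
from math import ceil

N = 3                       # number of circuit lines
SIZE = 1 << N
IDENTITY = tuple(range(SIZE))

def mct_gates(n):
    # Enumerate all multiple-control Toffoli gates: one target line plus any control set.
    for target in range(n):
        others = [b for b in range(n) if b != target]
        for k in range(len(others) + 1):
            for controls in combinations(others, k):
                yield target, controls

GATES = list(mct_gates(N))

def apply_gate(state, gate):
    # Flip the target bit of every output whose control bits are all 1.
    target, controls = gate
    mask = sum(1 << c for c in controls)
    return tuple(v ^ (1 << target) if (v & mask) == mask else v for v in state)

def heuristic(state):
    # Weak admissible lower bound: one gate changes at most 2**N output bits in total.
    mismatched_bits = sum(bin(v ^ i).count("1") for i, v in enumerate(state))
    return ceil(mismatched_bits / SIZE)

def search(state, g, bound, path):
    if g + heuristic(state) > bound:
        return None
    if state == IDENTITY:
        return list(path)   # gates applied on the output side; read in reverse for the cascade
    for gate in GATES:
        path.append(gate)
        result = search(apply_gate(state, gate), g + 1, bound, path)
        if result is not None:
            return result
        path.pop()
    return None

def ida_star(state):
    bound = heuristic(state)
    while True:
        found = search(state, 0, bound, [])
        if found is not None:
            return found
        bound += 1          # iterative deepening: raise the cost bound and retry

if __name__ == "__main__":
    spec = (7, 0, 1, 2, 3, 4, 5, 6)   # small example: x -> x - 1 (mod 8)
    print(ida_star(spec))             # a 3-gate MCT realization is expected
```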

  15. Sub-Second Parallel State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, which increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so the robustness of SE can be enhanced by repeating its execution with adaptive adjustments, including removing bad data and/or adjusting different initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of sub-second SE, such as that the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high-performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly more complex power grid.
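
    For context, the sketch below solves one textbook weighted-least-squares (WLS) step for a linear (DC) measurement model z = Hx + e, which is the core computation a conventional state estimator repeats each cycle; the PSE report's parallel solvers and BPA network data are not reproduced here, and all numbers are invented.

```python
import numpy as np

# Invented DC measurement model: 3 measurements of 2 state variables (bus angles).
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])
z = np.array([0.10, 0.05, 0.04])          # measured values (rad)
R = np.diag([1e-4, 1e-4, 4e-4])           # measurement error covariance

# WLS estimate: x_hat = (H^T R^-1 H)^-1 H^T R^-1 z  (the gain-matrix solve).
W = np.linalg.inv(R)
G = H.T @ W @ H                           # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

residuals = z - H @ x_hat                 # large residuals would flag bad data
print("estimated states:", x_hat, "residuals:", residuals)
```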

  16. Home Computer Use and Academic Performance of Nine-Year-Olds

    ERIC Educational Resources Information Center

    Casey, Alice; Layte, Richard; Lyons, Sean; Silles, Mary

    2012-01-01

    A recent rise in home computer ownership has seen a growing number of children using computers and accessing the internet from a younger age. This paper examines the link between children's home computing and their academic performance in the areas of reading and mathematics. Data from the nine-year-old cohort of the Growing Up in Ireland survey…

  17. Multiscale Cancer Modeling

    PubMed Central

    Macklin, Paul; Cristini, Vittorio

    2013-01-01

    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  18. Power system distributed oscillation detection based on Synchrophasor data

    NASA Astrophysics Data System (ADS)

    Ning, Jiawei

    Along with the increasing demand for electricity, the integration of renewable energy and the deregulation of the power market, the power industry is facing unprecedented challenges. Within the last couple of decades, several serious blackouts have taken place in the United States. As an effective approach to prevent them, power system small-signal stability monitoring has been drawing increasing interest and attention from researchers. With the widespread implementation of synchrophasors around the world in the last decade, real-time online monitoring of power systems has become much more feasible. Compared with planning-study analysis, real-time online monitoring benefits control room operators immediately and directly. Among all online monitoring methods, Oscillation Modal Analysis (OMA), a modal identification method based on routine measurement data where the input is unmeasured ambient excitation, is a great tool to evaluate and monitor power system small-signal stability. Indeed, high-sampling-rate synchrophasor data from around the power system are a perfect fit as inputs to OMA. Existing OMA methods for power systems are all based on centralized algorithms applied at control centers only; however, with the rapidly growing number of online synchrophasors, the computational burden at control centers is expanding exponentially and will continue to do so. The increasing computation time at the control center compromises the real-time character of online monitoring, and the communication effort between substations and the control center will also become prohibitive. Meanwhile, it is difficult or even impossible for centralized algorithms to detect some poorly damped local modes. In order to avoid these shortcomings of centralized OMA methods and embrace the new changes in power systems, two new distributed oscillation detection methods with two new decentralized structures are presented in this dissertation. Since the new schemes bring substations into the overall oscillation detection picture, the proposed methods can achieve faster and more reliable results. This claim is then tested and validated by test results from the IEEE two-area simulation test system and case studies using real power system historian synchrophasor data.

  19. Nowhere to Hide: The Growing Threat to Air Bases

    DTIC Science & Technology

    2013-06-01

    May–June 2013 Air & Space Power Journal | 30 Feature Nowhere to Hide The Growing Threat to Air Bases Col Shannon W. Caudill, USAF Maj Benjamin R...May–June 2013 Air & Space Power Journal | 33 Caudill & Jacobson Nowhere to Hide Feature The Growing Precision of Indirect Fire IDF has become the...personnel.22 More troubling still is the growing threat from within the ranks of American personnel. On 11 May 2009, five American military mem- bers

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, Ken; Hedman, Bruce

    Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric power equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and types of facilities involved, and the geographic distribution. (2) Data Center Energy Use Trends--a discussion of energy use and expected energy growth and the typical energy consumption and uses in data centers. (3) CHP Applicability--Potential configurations, CHP case studies, applicable equipment, heat recovery opportunities (cooling), cost and performance benchmarks, and power reliability benefits (4) CHP Drivers and Hurdles--evaluation of user benefits, social benefits, market structural issues and attitudes toward CHP, and regulatory hurdles. (5) CHP Paths to Market--Discussion of technical needs, education, strategic partnerships needed to promote CHP in the IT community.

  1. Algorithms for classification of astronomical object spectra

    NASA Astrophysics Data System (ADS)

    Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.

    2015-09-01

    Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectra decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks, utilising C++ and OpenCL methods together with a NoSQL database. We also implemented typical astronomical peak-detection methods for comparison with our previous hybrid methods implemented with CUDA.
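
    For illustration only, the snippet below fits a single Gaussian to a synthetic emission peak with SciPy's curve_fit; the paper's pipeline performs the same kind of fit in C++/OpenCL over large numbers of spectra, and the continuum and iron-subtraction steps are omitted here. All spectral values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wavelength, amplitude, center, sigma):
    # Single emission-line profile (flat continuum assumed and omitted).
    return amplitude * np.exp(-0.5 * ((wavelength - center) / sigma) ** 2)

# Synthetic emission peak with noise, standing in for one observed spectrum.
rng = np.random.default_rng(1)
wl = np.linspace(4800.0, 5000.0, 400)                     # wavelength grid (Angstrom)
flux = gaussian(wl, 3.0, 4902.0, 12.0) + rng.normal(0.0, 0.1, wl.size)

popt, pcov = curve_fit(gaussian, wl, flux, p0=[1.0, 4900.0, 10.0])
amplitude, center, sigma = popt
print(f"amplitude={amplitude:.2f}, center={center:.1f} A, sigma={sigma:.1f} A")
```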

  2. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  3. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Crumley, Paul G [Yorktown Heights, NY; Gara, Alan G [Mount Kidsco, NY; Giampapa, Mark E [Irvington, NY; Gooding,; Thomas, M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Megerian, Mark G [Rochester, MN; Ohmacht, Martin [Yorktown Heights, NY; Reed, Don D [Mantorville, MN; Swetz, Richard A [Mahopac, NY; Takken, Todd [Brewster, NY

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  4. P2P Technology for High-Performance Computing: An Overview

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Berry, Jason

    2003-01-01

    The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility to use the whole Internet rather than local clusters as a massively parallel yet almost freely available P2P supercomputer. As a part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and I have compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and I have summarized my findings in a poster paper.

  5. On agent-based modeling and computational social science.

    PubMed

    Conte, Rosaria; Paolucci, Mario

    2014-01-01

    In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS.

  6. On agent-based modeling and computational social science

    PubMed Central

    Conte, Rosaria; Paolucci, Mario

    2014-01-01

    In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS. PMID:25071642

  7. HGML: a hypertext guideline markup language.

    PubMed Central

    Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.

    2000-01-01

    Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898

  8. Sparse approximation of currents for statistics on curves and surfaces.

    PubMed

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

    Computing, processing, and visualizing statistics on shapes like curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids the feature-based approach as well as point-correspondence methods. This framework has proved powerful for registering brain surfaces and for measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics due to the increasing complexity as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.

  9. Materials Databases Infrastructure Constructed by First Principles Calculations: A Review

    DOE PAGES

    Lin, Lianshan

    2015-10-13

    First Principles calculations, especially those based on high-throughput Density Functional Theory, have been widely accepted as major tools in atomic-scale materials design. The emerging supercomputers, along with the powerful First Principles calculations, have accumulated hundreds of thousands of crystal and compound records. The exponential growth of computational materials information urges the development of materials databases, which not only provide unlimited storage for the daily increasing data, but also keep the efficiency in data storage, management, query, presentation and manipulation. This review covers the most cutting-edge materials databases in materials design and their hot applications, such as in fuel cells. By comparing the advantages and drawbacks of these high-throughput First Principles materials databases, the optimized computational framework can be identified to fit the needs of fuel cell applications. The further development of high-throughput DFT materials databases, which in essence accelerates materials innovation, is discussed in the summary as well.

  10. Study Neutronic of Small Pb-Bi Cooled Non-Refuelling Nuclear Power Plant Reactor (SPINNOR) with Hexagonal Geometry Calculation

    NASA Astrophysics Data System (ADS)

    Nur Krisna, Dwita; Su'ud, Zaki

    2017-01-01

    Nuclear reactor technology is growing rapidly, especially in the development of Nuclear Power Plants (NPPs). The utilization of nuclear energy in power generation systems has progressed from the first generation to the fourth generation. This final project paper discusses the neutronic analysis of a Pb-Bi-cooled fast reactor that is capable of operating for up to 20 years without refueling. The reactor uses Thorium Uranium Nitride as fuel and operates in the power range of 100-500 MWt. The calculations were performed with a computer simulation program utilizing the SRAC code. The SPINNOR reactor is designed with a hexagonal core geometry that is radially divided into three regions: the outermost region with the highest fuel percentage, the middle region with a medium fuel percentage, and the innermost region with the lowest percentage. The SPINNOR fast reactor is operated for 20 years with Uranium-233 percentages of 7%, 7.75%, and 8.5%. The neutronic calculation and analysis show that the design can be optimized as a fast reactor for a thermal power output of 300 MWt with a fuel fraction of 60% and Uranium-233 enrichment varied between 7% and 8.5%.

  11. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
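
    A minimal sketch of the variogram building block behind VARS, under invented settings: for a one-dimensional factor, gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] is estimated from model evaluations at regularly spaced points, so its behaviour across lags h summarizes the response's sensitivity structure across scales. This shows only the building block, not the STAR-VARS sampling strategy or the multi-factor case.

```python
import numpy as np

def model(x):
    # Hypothetical scalar response of one input factor.
    return np.sin(2.0 * np.pi * x) + 0.3 * x

# Evaluate the model on a regular grid of the factor.
dx = 0.01
x = np.arange(0.0, 1.0 + dx, dx)
y = model(x)

def variogram(y, lag_steps):
    # gamma(h) = 0.5 * mean squared difference of responses a lag h apart.
    diffs = y[lag_steps:] - y[:-lag_steps]
    return 0.5 * np.mean(diffs ** 2)

for h_steps in (1, 5, 10, 25, 50):
    print(f"h = {h_steps * dx:.2f}: gamma = {variogram(y, h_steps):.4f}")
```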

  12. Teaching Sustainable Energy and Power Electronics to Engineering Students in a Laboratory Environment Using Industry-Standard Tools

    ERIC Educational Resources Information Center

    Ochs, David S.; Miller, Ruth Douglas

    2015-01-01

    Power electronics and renewable energy are two important topics for today's power engineering students. In many cases, the two topics are inextricably intertwined. As the renewable energy sector grows, the need for engineers qualified to design such systems grows as well. In order to train such engineers, new courses are needed that highlight the…

  13. Theoretical Heterogeneous Catalysis: Scaling Relationships and Computational Catalyst Design.

    PubMed

    Greeley, Jeffrey

    2016-06-07

    Scaling relationships are theoretical constructs that relate the binding energies of a wide variety of catalytic intermediates across a range of catalyst surfaces. Such relationships are ultimately derived from bond order conservation principles that were first introduced several decades ago. Through the growing power of computational surface science and catalysis, these concepts and their applications have recently begun to have a major impact in studies of catalytic reactivity and heterogeneous catalyst design. In this review, the detailed theory behind scaling relationships is discussed, and the existence of these relationships for catalytic materials ranging from pure metal to oxide surfaces, for numerous classes of molecules, and for a variety of catalytic surface structures is described. The use of the relationships to understand and elucidate reactivity trends across wide classes of catalytic surfaces and, in some cases, to predict optimal catalysts for certain chemical reactions, is explored. Finally, the observation that, in spite of the tremendous power of scaling relationships, their very existence places limits on the maximum rates that may be obtained for the catalyst classes in question is discussed, and promising strategies are explored to overcome these limitations to usher in a new era of theory-driven catalyst design.

  14. Scale-free Graphs for General Aviation Flight Schedules

    NASA Technical Reports Server (NTRS)

    Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.

    2003-01-01

    In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics, unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. This manuscript provides an overview of the salient characteristics of scale-free networks. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
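
    As a concrete instance of one dynamic mechanism that grows scale-free graphs, the sketch below implements simple Barabási-Albert-style preferential attachment (each new node attaches to m existing nodes with probability proportional to their current degree) and reports the tail of the resulting degree distribution. The parameters are illustrative and this is not necessarily the mechanism used in the manuscript.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=0):
    # Grow a graph by attaching each new node to m targets chosen
    # with probability proportional to current degree.
    random.seed(seed)
    edges = []
    # Repeated-endpoint list: uniform choice from it is degree-proportional sampling.
    attachment_pool = list(range(m))          # small seed core of m nodes
    for new_node in range(m, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(attachment_pool))
        for t in targets:
            edges.append((new_node, t))
            attachment_pool.extend([new_node, t])
    return edges

edges = preferential_attachment(5000)
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

tail = Counter(degree.values())
print(sorted(tail.items())[-5:])   # a heavy tail of high-degree hubs is expected
```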

  15. Nurturing a growing field: Computers & Geosciences

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; Pebesma, Edzer

    2017-10-01

    Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.

  16. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
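
    The patent abstract describes combining a hardware power consumption profile with an application's behavior to derive an application-level profile. As a rough, hypothetical illustration of that idea (not the patented method), one can weight the time an application spends in each operation class by the node's power draw for that class; all names and wattage figures below are invented.

    ```python
    # Hypothetical hardware power consumption profile: average node power draw
    # (watts) while performing each class of operation. All values are invented.
    HARDWARE_POWER_PROFILE_W = {
        "compute": 95.0,
        "memory": 60.0,
        "network": 45.0,
        "idle": 20.0,
    }

    def application_power_profile(op_durations_s, hardware_profile=HARDWARE_POWER_PROFILE_W):
        """Combine the time an application spends in each operation class with
        the hardware profile to estimate per-class and total energy in joules."""
        report = {}
        for op, seconds in op_durations_s.items():
            watts = hardware_profile.get(op, hardware_profile["idle"])
            report[op] = {"seconds": seconds, "watts": watts, "joules": watts * seconds}
        report["total_joules"] = sum(entry["joules"] for entry in report.values())
        return report

    # Invented operation mix for a hypothetical application run.
    print(application_power_profile({"compute": 120.0, "memory": 40.0, "network": 15.0}))
    ```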

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

  18. Clustering of low-valence particles: structure and kinetics.

    PubMed

    Markova, Olga; Alberts, Jonathan; Munro, Edwin; Lenne, Pierre-François

    2014-08-01

    We compute the structure and kinetics of two systems of low-valence particles with three or six freely oriented bonds in two dimensions. The structure of clusters formed by trivalent particles is complex, with loops and holes, while hexavalent particles self-organize into regular and compact structures. We identify the elementary structures which compose the clusters of trivalent particles. At initial stages of clustering, the clusters of trivalent particles grow with a power-law time dependence. Yet at longer times fusion and fission of clusters equilibrate, and clusters form a heterogeneous phase with polydisperse sizes. These results emphasize the role of valence in the kinetics and stability of finite-size clusters.

  19. Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.

    PubMed

    Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal

    2014-05-15

    We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain physical chemistry limitations of the simulations, namely, the molecular mechanics (MM) force field (FF) approximation and limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations with contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique for studying the structural dynamics of nucleic acids with a fast growing potential that substantially complements experimental results and aids their interpretation.

  20. Nanoscale light–matter interactions in atomic cladding waveguides

    PubMed Central

    Stern, Liron; Desiatov, Boris; Goykhman, Ilya; Levy, Uriel

    2013-01-01

    Alkali vapours, such as rubidium, are being used extensively in several important fields of research such as slow and stored light, nonlinear optics, quantum computation, atomic clocks and magnetometers. Recently, there has been a growing effort towards miniaturizing traditional centimetre-size vapour cells. Owing to the significant reduction in device dimensions, light–matter interactions are greatly enhanced, enabling new functionalities due to the low power threshold needed for nonlinear interactions. Here, taking advantage of the mature platform of silicon photonics, we construct an efficient and flexible platform for tailored light–vapour interactions on a chip. Specifically, we demonstrate light–matter interactions in an atomic cladding waveguide, consisting of a silicon nitride nano-waveguide core with a rubidium vapour cladding. We observe the efficient interaction of the electromagnetic guided mode with the rubidium cladding and show that due to the high confinement of the optical mode, the rubidium absorption saturates at powers in the nanowatt regime. PMID:23462991

  1. Feasibility of energy harvesting techniques for wearable medical devices.

    PubMed

    Voss, Thaddaeus J; Subbian, Vignesh; Beyette, Fred R

    2014-01-01

    Wearable devices are arguably one of the most rapidly growing technologies in the computing and health care industry. These systems provide improved means of monitoring health status of humans in real-time. In order to cope with continuous sensing and transmission of biological and health status data, it is desirable to move towards energy autonomous systems that can charge batteries using passive, ambient energy. This not only ensures uninterrupted data capturing, but could also eliminate the need to frequently remove, replace, and recharge batteries. To this end, energy harvesting is a promising area that can lead to extremely power-efficient portable medical devices. This paper presents an experimental prototype to study the feasibility of harvesting two energy sources, solar and thermoelectric energy, in the context of wearable devices. Preliminary results show that such devices can be powered by transducing ambient energy that constantly surrounds us.

  2. Perspectives in numerical astrophysics:

    NASA Astrophysics Data System (ADS)

    Reverdy, V.

    2016-12-01

    In this discussion paper, we investigate the current and future status of numerical astrophysics and highlight key questions concerning the transition to the exascale era. We first discuss the fact that one of the main motivations behind high performance simulations should not be the reproduction of observational or experimental data, but the understanding of the emergence of complexity from fundamental laws. This motivation is put into perspective regarding the quest for more computational power, and we argue that extra computational resources can be used to gain in abstraction. Then, the readiness level of present-day simulation codes with regard to upcoming exascale architectures is examined, and two major challenges are raised concerning both the central role of data movement for performance and the growing complexity of codes. Software architecture is finally presented as a key component to make the most of upcoming architectures while solving original physics problems.

  3. Digital Textbooks. Research Brief

    ERIC Educational Resources Information Center

    Johnston, Howard

    2011-01-01

    Despite their growing popularity, digital alternatives to conventional textbooks are stirring up controversy. With the introduction of tablet computers and the growing shift toward "cloud computing" and "open source" software, the trend is accelerating because costs are coming down and free or inexpensive materials are becoming more available.…

  4. Energy efficiency analysis and optimization for mobile platforms

    NASA Astrophysics Data System (ADS)

    Metri, Grace Camille

    The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computers (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015. Users of mobile devices expect and mandate that their mobile devices deliver maximum performance while consuming the minimal possible power. However, due to battery size constraints, the amount of energy stored in these devices is limited and is only growing by 5% annually. As a result, we focused in this dissertation on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies in order to determine energy inefficiencies of mobile applications. Through our case studies, we were able to propose optimization techniques to increase the energy efficiency of mobile devices and to propose guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of System-on-Chips (SoCs) and observed the impact on energy efficiency when offloading tasks from the CPU to specialized custom engines. Based on our case studies, we were able to demonstrate that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to be able to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible directions for future research in this field.

  5. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  6. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models at the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  7. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  8. Advancing botnet modeling techniques for military and security simulations

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, a type that is more powerful and potentially dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, their members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose combining the biologically inspired MSEIR infection spread model with the jump-diffusion infection spread model to portray botnet propagation.

  9. CUDA GPU based full-Stokes finite difference modelling of glaciers

    NASA Astrophysics Data System (ADS)

    Brædstrup, C. F.; Egholm, D. L.

    2012-04-01

    Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equations in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers have much faster convergence. To aid convergence, the solver uses a multigrid approach where values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using the GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported between 2 - 11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conjugates for large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
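
    For readers unfamiliar with the solver named above, a minimal red-black Gauss-Seidel sweep can be sketched for a 2-D Poisson problem, a far simpler system than the full Stokes equations treated in the cited model; the sketch below only shows the colouring and update pattern, and uses plain Python loops rather than a GPU implementation.

    ```python
    import numpy as np

    def red_black_gauss_seidel(u, f, h, sweeps=200):
        """Red-black Gauss-Seidel sweeps for the 2-D Poisson equation
        -laplace(u) = f on a square grid with spacing h; boundary values
        of u are held fixed (Dirichlet conditions)."""
        for _ in range(sweeps):
            for colour in (0, 1):                      # red points, then black points
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == colour:
                            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                              + u[i, j - 1] + u[i, j + 1]
                                              + h * h * f[i, j])
        return u

    n = 65
    h = 1.0 / (n - 1)
    f = np.ones((n, n))      # uniform source term
    u = np.zeros((n, n))     # zero boundary values and initial guess
    u = red_black_gauss_seidel(u, f, h)
    print("centre value:", u[n // 2, n // 2])
    ```

    Because points of one colour depend only on points of the other colour, all red (or all black) points can be updated in parallel, which is what makes the scheme attractive on GPUs; a production solver would also wrap the sweeps in the multigrid cycle the abstract describes.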

  10. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  11. Toward Petascale Biologically Plausible Neural Networks

    NASA Astrophysics Data System (ADS)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
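
    As a point of reference for the neuron model named above, a single Hodgkin-Huxley neuron with the standard squid-axon parameters can be integrated with forward Euler in a few lines; this sketch is not the parallel MPI code described in the talk, and the spike-detection rule is an arbitrary simplification.

    ```python
    import math

    def hodgkin_huxley_spike_count(i_ext=10.0, t_max=50.0, dt=0.01):
        """Forward-Euler integration of one Hodgkin-Huxley neuron
        (units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2)."""
        c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
        e_na, e_k, e_l = 50.0, -77.0, -54.4
        v, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
        spikes, above = 0, False
        for _ in range(int(t_max / dt)):
            a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
            b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
            a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
            b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
            a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
            b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
            i_ion = (g_na * m ** 3 * h * (v - e_na)
                     + g_k * n ** 4 * (v - e_k)
                     + g_l * (v - e_l))
            v += dt * (i_ext - i_ion) / c_m
            m += dt * (a_m * (1.0 - m) - b_m * m)
            h += dt * (a_h * (1.0 - h) - b_h * h)
            n += dt * (a_n * (1.0 - n) - b_n * n)
            if v > 0.0 and not above:                # crude spike detection
                spikes += 1
            above = v > 0.0
        return spikes

    print("spikes in 50 ms:", hodgkin_huxley_spike_count())
    ```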

  12. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising in helping both teachers and learners in their difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, a university-level educational PC program called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme, making it possible to handle a variety of relevant governing equations in two dimensions (the 2D Laplace, Poisson, diffusion, and transient convection-diffusion equations) on personal computers, thanks to its simple pre- and postprocessing.

  13. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has transformed the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions, but also result in rising costs for cloud users. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically.
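
    The RUA and PA algorithms themselves are not detailed in this abstract; as a rough illustration of the consolidation idea (pack VMs onto as few hosts as possible so idle hosts can be powered down), a toy best-fit-decreasing placement is sketched below. The function, demands, and capacities are invented for illustration and are not the authors' algorithms.

    ```python
    def consolidate(vm_demands, host_capacity):
        """Best-fit-decreasing placement: pack VMs onto as few hosts as possible
        so that the remaining (empty) hosts can be powered down."""
        hosts = []                           # remaining capacity of each active host
        placement = {}
        for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
            # Pick the active host with the least remaining capacity that still fits.
            fitting = [(remaining, idx) for idx, remaining in enumerate(hosts)
                       if remaining >= demand]
            if fitting:
                _, idx = min(fitting)
            else:
                hosts.append(host_capacity)  # power up one more host
                idx = len(hosts) - 1
            hosts[idx] -= demand
            placement[vm] = idx
        return placement, len(hosts)

    # Invented CPU demands (cores) for eight VMs on 8-core hosts.
    demands = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2, "vm5": 5, "vm6": 1, "vm7": 3, "vm8": 2}
    placement, active_hosts = consolidate(demands, host_capacity=8)
    print(placement, "active hosts:", active_hosts)
    ```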

  14. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has transformed the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions, but also result in rising costs for cloud users. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers’ resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center’s energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically. PMID:26901201

  15. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  16. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects between HSWO region growing iterations merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  17. Pilot evaluation of electricity-reliability and power-quality monitoring in California's Silicon Valley with the I-Grid(R) system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph; Divan, Deepak; Brumsickle, William

    2004-02-01

    Power-quality events are of increasing concern for the economy because today's equipment, particularly computers and automated manufacturing devices, is susceptible to these imperceptible voltage changes. A small variation in voltage can cause this equipment to shut down for long periods, resulting in significant business losses. Tiny variations in power quality are difficult to detect except with expensive monitoring equipment used by trained technicians, so many electricity customers are unaware of the role of power-quality events in equipment malfunctioning. This report describes the findings from a pilot study coordinated through the Silicon Valley Manufacturers Group in California to explore the capabilities of I-Grid(R), a new power-quality monitoring system. This system is designed to improve the accessibility of power-quality information and to increase understanding of the growing importance of electricity reliability and power quality to the economy. The study used data collected by I-Grid sensors at seven Silicon Valley firms to investigate the impacts of power quality on individual study participants as well as to explore the capabilities of the I-Grid system to detect events on the larger electricity grid by means of correlation of data from the sensors at the different sites. In addition, study participants were interviewed about the value they place on power quality, and their efforts to address electricity-reliability and power-quality problems. Issues were identified that should be taken into consideration in developing a larger, potentially nationwide, network of power-quality sensors.

  18. A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization

    PubMed Central

    Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar

    2015-01-01

    Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfying solutions that consider both accuracy and system complexity have not been found. From the perspective of lightweight mobile devices, both characteristics are extremely important, because processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it has high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points and device orientation information from a digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system leads to substantial improvements in computational complexity over the widely-used traditional fingerprinting methods, while achieving better accuracy. PMID:26110413
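
    The general approach described above, boosting an ensemble of weighted decision trees over Wi-Fi signal strengths and compass orientation, can be sketched with scikit-learn on synthetic data; the data, feature layout, and hyperparameters below are invented and do not reproduce the authors' system.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic fingerprints: RSSI (dBm) from four access points plus a compass
    # heading, collected in three rooms. All numbers are invented.
    n_per_room = 200
    room_means = np.array([[-40, -70, -80, -60],
                           [-75, -45, -65, -70],
                           [-65, -80, -50, -55]], dtype=float)
    features, labels = [], []
    for room, means in enumerate(room_means):
        rssi = means + rng.normal(0.0, 4.0, size=(n_per_room, 4))
        heading = rng.uniform(0.0, 360.0, size=(n_per_room, 1))
        features.append(np.hstack([rssi, heading]))
        labels.append(np.full(n_per_room, room))
    X, y = np.vstack(features), np.concatenate(labels)

    # Boosted ensemble of shallow trees: each weak learner is cheap to evaluate,
    # which keeps on-device prediction cost low. (In scikit-learn releases older
    # than 1.2 the keyword is base_estimator instead of estimator.)
    model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                               n_estimators=50, random_state=0)
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))
    ```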

  19. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  20. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
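
    The core of the event-triggered idea is that the (expensive) state estimator is invoked only when a new measurement shows considerable novelty relative to the current estimate. A toy scalar sketch of such a trigger is given below; the threshold, signal, and stand-in estimator are invented and bear no relation to the IEEE 14-bus formulation used in the paper.

    ```python
    import numpy as np

    def event_triggered_estimation(measurements, threshold, estimator):
        """Call the (expensive) estimator only when a measurement deviates from
        the current estimate by more than `threshold`; otherwise hold the
        previous estimate. Returns the estimate trace and the call count."""
        current = estimator(measurements[0])
        calls = 1
        estimates = []
        for z in measurements:
            if abs(z - current) > threshold:          # "considerable novelty"
                current = estimator(z)                # full state-estimation call
                calls += 1
            estimates.append(current)
        return np.array(estimates), calls

    rng = np.random.default_rng(1)
    # Flat segment followed by a ramp event, plus measurement noise.
    true_state = np.concatenate([np.ones(100), 1.0 + 0.5 * np.linspace(0.0, 1.0, 100)])
    measurements = true_state + rng.normal(0.0, 0.02, size=true_state.size)

    # The lambda stands in for a costly estimator (e.g., a weighted least-squares solve).
    estimates, calls = event_triggered_estimation(measurements, threshold=0.06,
                                                  estimator=lambda z: z)
    print(f"estimator invoked {calls} times for {measurements.size} measurements")
    ```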

  1. Wireless Monitoring of Induction Machine Rotor Physical Variables

    PubMed Central

    Doolan Fernandes, Jefferson; Carvalho Souza, Francisco Elvis; de Paiva, José Alvaro

    2017-01-01

    With the widespread use of electric machines, there is a growing need to extract information from the machines to improve their control systems and maintenance management. The present work shows the development of an embedded system to perform the monitoring of the rotor physical variables of a squirrel cage induction motor. The system is comprised of: a circuit to acquire the desired rotor variable(s) and value(s) and send them to the computer; a rectifier and power storage circuit that converts the alternating current into direct current and also stores energy for a certain amount of time to wait for the motor’s shutdown; and a magnetic generator that harvests energy from the rotating field to power the circuits mentioned above. The embedded system is set on the rotor of a 5 HP squirrel cage induction motor, making it difficult to power the system because it is rotating. This problem can be solved with the construction of a magnetic generator device to avoid the need for batteries or collector rings; the system sends data to the computer using a wireless NRF24L01 module. For the proposed system, initial validation tests were made using a temperature sensor (DS18b20), as this variable is known as the most important when identifying the need for maintenance and control systems. A few tests have shown promising results that, with further improvements, can prove the feasibility of using sensors in the rotor. PMID:29156564

  2. Wireless Monitoring of Induction Machine Rotor Physical Variables.

    PubMed

    Doolan Fernandes, Jefferson; Carvalho Souza, Francisco Elvis; Cipriano Maniçoba, Glauco George; Salazar, Andrés Ortiz; de Paiva, José Alvaro

    2017-11-18

    With the widespread use of electric machines, there is a growing need to extract information from the machines to improve their control systems and maintenance management. The present work shows the development of an embedded system to perform the monitoring of the rotor physical variables of a squirrel cage induction motor. The system is comprised of: a circuit to acquire the desired rotor variable(s) and value(s) and send them to the computer; a rectifier and power storage circuit that converts the alternating current into direct current and also stores energy for a certain amount of time to wait for the motor's shutdown; and a magnetic generator that harvests energy from the rotating field to power the circuits mentioned above. The embedded system is set on the rotor of a 5 HP squirrel cage induction motor, making it difficult to power the system because it is rotating. This problem can be solved with the construction of a magnetic generator device to avoid the need for batteries or collector rings; the system sends data to the computer using a wireless NRF24L01 module. For the proposed system, initial validation tests were made using a temperature sensor (DS18b20), as this variable is known as the most important when identifying the need for maintenance and control systems. A few tests have shown promising results that, with further improvements, can prove the feasibility of using sensors in the rotor.

  3. Ultra Low Power Signal Oriented Approach for Wireless Health Monitoring

    PubMed Central

    Marinkovic, Stevan; Popovici, Emanuel

    2012-01-01

    In recent years there has been growing pressure on the medical sector to reduce costs while maintaining or even improving the quality of care. A potential solution to this problem is real-time and/or remote patient monitoring by using mobile devices. To achieve this, medical sensors with wireless communication, computational and energy harvesting capabilities are networked on, or in, the human body, forming what is commonly called a Wireless Body Area Network (WBAN). We present the implementation of a novel Wake Up Receiver (WUR) in the context of standardised wireless protocols, in a signal-oriented WBAN environment, and present a novel protocol intended for wireless health monitoring (WhMAC). WhMAC is a TDMA-based protocol with very low power consumption. It utilises WBAN-specific features and a novel ultra low power wake up receiver technology to achieve flexible and at the same time very low power wireless data transfer of physiological signals. As the main application is in the medical domain, or personal health monitoring, the protocol caters for different types of medical sensors. We define four sensor modes, in which the sensors can transmit data, depending on the sensor type and emergency level. A full power dissipation model is provided for the protocol, with individual hardware and application parameters. Finally, an example application shows the reduction in the power consumption for different data monitoring scenarios. PMID:22969379

  4. Ultra low power signal oriented approach for wireless health monitoring.

    PubMed

    Marinkovic, Stevan; Popovici, Emanuel

    2012-01-01

    In recent years there has been growing pressure on the medical sector to reduce costs while maintaining or even improving the quality of care. A potential solution to this problem is real-time and/or remote patient monitoring by using mobile devices. To achieve this, medical sensors with wireless communication, computational and energy harvesting capabilities are networked on, or in, the human body, forming what is commonly called a Wireless Body Area Network (WBAN). We present the implementation of a novel Wake Up Receiver (WUR) in the context of standardised wireless protocols, in a signal-oriented WBAN environment, and present a novel protocol intended for wireless health monitoring (WhMAC). WhMAC is a TDMA-based protocol with very low power consumption. It utilises WBAN-specific features and a novel ultra low power wake up receiver technology to achieve flexible and at the same time very low power wireless data transfer of physiological signals. As the main application is in the medical domain, or personal health monitoring, the protocol caters for different types of medical sensors. We define four sensor modes, in which the sensors can transmit data, depending on the sensor type and emergency level. A full power dissipation model is provided for the protocol, with individual hardware and application parameters. Finally, an example application shows the reduction in the power consumption for different data monitoring scenarios.

  5. Extensible Computational Chemistry Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

  6. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time consuming, and account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation, such as covariate inclusions or bootstraps.
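
    The general idea, advancing a slowly varying or analytically solvable part of the system as a difference equation at interval boundaries while the solver integrates only the remainder, can be illustrated with a generic one-compartment absorption model in SciPy; this is a schematic example, not one of the nine NONMEM models studied, and the rate constants are invented.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # One-compartment model with first-order absorption (rate constants invented):
    #   dA_depot/dt   = -ka * A_depot
    #   dA_central/dt =  ka * A_depot - ke * A_central
    ka, ke, dose = 1.2, 0.15, 100.0

    def full_ode(t, y):
        a_depot, a_central = y
        return [-ka * a_depot, ka * a_depot - ke * a_central]

    # Variant: the depot compartment has a closed-form solution, so it is advanced
    # as a difference equation at interval boundaries and only the central
    # compartment is left inside the ODE solver.
    def reduced_ode(t, y, depot_at_start, t_start):
        a_depot = depot_at_start * np.exp(-ka * (t - t_start))
        return [ka * a_depot - ke * y[0]]

    t_grid = np.linspace(0.0, 24.0, 25)                   # 1 h intervals
    a_depot, a_central = dose, 0.0
    central_trace = [a_central]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        sol = solve_ivp(reduced_ode, (t0, t1), [a_central],
                        args=(a_depot, t0), rtol=1e-8, atol=1e-10)
        a_central = sol.y[0, -1]
        a_depot *= np.exp(-ka * (t1 - t0))                # difference-equation update
        central_trace.append(a_central)

    reference = solve_ivp(full_ode, (0.0, 24.0), [dose, 0.0],
                          t_eval=t_grid, rtol=1e-8, atol=1e-10)
    print("max abs deviation:", np.max(np.abs(reference.y[1] - np.array(central_trace))))
    ```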

  7. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  8. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  9. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools that address other domains with spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggest that GISpark achieves excellent runtime performance in spatiotemporal big data applications.
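
    GISpark itself is not publicly documented in this record, but the kind of Spark-based spatiotemporal aggregation it targets, for example binning taxi GPS points onto a grid per hour, can be sketched with plain PySpark; the file path and column names below are assumptions made only for illustration.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("taxi-grid-aggregation").getOrCreate()

    # Assumed CSV layout: taxi_id, timestamp, longitude, latitude (path is hypothetical).
    points = spark.read.csv("hdfs:///data/taxi_trajectories.csv",
                            header=True, inferSchema=True)

    cell_size_deg = 0.01   # roughly 1 km grid cells at mid latitudes

    # Snap each GPS point to a grid cell and count points per cell and hour of day.
    density = (points
               .withColumn("ts", F.to_timestamp("timestamp"))
               .withColumn("cell_x", F.floor(F.col("longitude") / cell_size_deg))
               .withColumn("cell_y", F.floor(F.col("latitude") / cell_size_deg))
               .withColumn("hour", F.hour("ts"))
               .groupBy("cell_x", "cell_y", "hour")
               .count()
               .orderBy(F.desc("count")))

    density.show(10)
    spark.stop()
    ```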

  10. An Autonomous Underwater Recorder Based on a Single Board Computer.

    PubMed

    Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson

    2015-01-01

    As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Designing and building more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a successful model of a highly flexible, low-cost alternative to commercial recorders was built based on a Raspberry Pi single board computer. A properly working prototype was assembled and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated. It is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and manmade acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantage offered by the proposed device is its high customizability, lower development time and, inherently, its cost.

  11. Polymorphic Attacks and Network Topology: Application of Concepts from Natural Systems

    ERIC Educational Resources Information Center

    Rangan, Prahalad

    2010-01-01

    The growing complexity of interactions between computers and networks makes the subject of network security a very interesting one. As our dependence on the services provided by computing networks grows, so does our investment in such technology. In this situation, there is a greater risk of occurrence of targeted malicious attacks on computers…

  12. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Demand for faster computers is growing rapidly to keep pace with the fast-growing Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide the way out of the extreme limitations imposed on the growth of the speed and complexity of today's computations by conventional electronics. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building the future optical computer.

  13. Institute for Sustained Performance, Energy, and Resilience (SuPER)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jagode, Heike; Bosilca, George; Danalis, Anthony

    The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications to take advantage of specific mathematical properties for data redundancy, and to extensions to widely used programming paradigms to empower the application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies.
Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.« less
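
    To illustrate the counter-based measurement model that PAPI exposes, the sketch below uses the third-party pypapi Python binding (papi_high.start_counters / stop_counters); the binding, the chosen events, and the toy workload are assumptions made here for illustration, not part of the SUPER deliverables described above.

```python
# Minimal sketch: reading hardware performance counters through PAPI's
# high-level interface via the third-party pypapi binding (assumed installed
# as the python_papi package). Event availability depends on the CPU.
from pypapi import papi_high
from pypapi import events as papi_events

def workload(n=500_000):
    # Toy floating-point loop standing in for an application kernel.
    s = 0.0
    for i in range(1, n):
        s += 1.0 / (i * i)
    return s

# Count total cycles and total instructions around the region of interest.
papi_high.start_counters([papi_events.PAPI_TOT_CYC, papi_events.PAPI_TOT_INS])
workload()
cycles, instructions = papi_high.stop_counters()

print(f"cycles={cycles}  instructions={instructions}  IPC={instructions / cycles:.2f}")
```

    Higher-level tools such as those named in the record wrap essentially this same counter interface, which is why PAPI can serve as common middleware beneath them.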

  14. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
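
    The claim above is essentially a control loop: run prioritized applications at an initial power level, watch cumulative consumption, and throttle once a budget threshold is crossed. The sketch below is a hypothetical single-node simulation of that loop; all names, power levels, and the simple throttling action are illustrative assumptions, not the patented implementation.

```python
# Schematic simulation of budget-based power management for application
# execution (illustrative only; not the patented implementation).
from dataclasses import dataclass

@dataclass
class App:
    name: str
    priority: int      # higher priority runs first
    work_units: int    # remaining work

def run_with_power_budget(apps, budget_joules, initial_power=200.0, reduced_power=120.0):
    """Run apps in priority order; throttle node power once the threshold is hit."""
    apps = sorted(apps, key=lambda a: a.priority, reverse=True)
    consumed, power = 0.0, initial_power
    threshold = 0.8 * budget_joules            # predetermined consumption threshold
    for app in apps:
        while app.work_units > 0:
            # One 1-second time step: progress is roughly proportional to power.
            consumed += power * 1.0
            app.work_units -= int(power // 10)
            if consumed >= threshold and power > reduced_power:
                power = reduced_power          # power conservation action
    return consumed, power

used, final_power = run_with_power_budget(
    [App("solver", 2, 2000), App("postproc", 1, 500)], budget_joules=30_000)
print(f"energy used: {used:.0f} J, final power level: {final_power:.0f} W")
```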

  15. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  16. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
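
    As a pointer to the core quantity behind the variogram analogy, the directional variogram of a response surface y(x) can be written as below; the exact VARS/IVARS indices and the star-based sampling estimator follow the cited paper, so this is only the standard definition the framework builds on.

```latex
% Directional variogram of the model response y under a perturbation h of the factors,
% and its integral over perturbation scales up to H (the quantity underlying IVARS-type indices).
\gamma(h) \;=\; \tfrac{1}{2}\,\mathbb{E}_{x}\!\left[\bigl(y(x+h) - y(x)\bigr)^{2}\right],
\qquad
\Gamma(H) \;=\; \int_{0}^{H} \gamma(h)\,\mathrm{d}h .
```

    Small-scale behavior of the variogram relates to derivative-based (Morris-type) information, while its large-scale behavior relates to variance-based (Sobol-type) information, which is the sense in which both appear as special cases.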

  17. Stochastic Optimization for Unit Commitment-A Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.

    2015-07-01

    Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave has made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority list, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.
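
    For readers new to the topic, a schematic two-stage stochastic UC can be written as below, with binary commitment decisions made before uncertainty (e.g., renewable output or demand) is revealed and dispatch recourse per scenario; this is a generic textbook form, not a formulation taken from any specific paper in the review.

```latex
% Schematic two-stage stochastic unit commitment: first-stage commitment u and start-up v,
% second-stage dispatch p per scenario omega with probability pi_omega.
\begin{aligned}
\min_{u,\,v,\,p}\quad & \sum_{t}\sum_{g}\bigl(c^{\mathrm{NL}}_{g}\,u_{g,t} + c^{\mathrm{SU}}_{g}\,v_{g,t}\bigr)
  \;+\; \sum_{\omega}\pi_{\omega}\sum_{t}\sum_{g} c_{g}\,p_{g,t,\omega} \\
\text{s.t.}\quad & \sum_{g} p_{g,t,\omega} = d_{t,\omega} \quad \forall t,\omega, \\
 & P^{\min}_{g}\,u_{g,t} \;\le\; p_{g,t,\omega} \;\le\; P^{\max}_{g}\,u_{g,t} \quad \forall g,t,\omega, \\
 & u_{g,t},\,v_{g,t}\in\{0,1\}.
\end{aligned}
```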

  18. Open-systems architecture of a standardized command interface chip-set for switching and control of a spacecraft power bus

    NASA Technical Reports Server (NTRS)

    Ruiz, Ian B.; Burke, Gary R.; Lung, Gerald; Whitaker, William D.; Nowicki, Robert M.

    2004-01-01

    The Jet Propulsion Laboratory (JPL) has developed a command interface chip-set that primarily consists of two mixed-signal ASICs: the Command Interface ASIC (CIA) and the Analog Interface ASIC (AIA). The open-systems architecture employed during the design of this chip-set enables its use both as an intelligent gateway between the system's flight computer and the control, actuation, and activation of the spacecraft's loads, valves, and pyrotechnics, respectively, and as the regulator of the spacecraft power bus. Furthermore, the architecture is highly adaptable and employs fault-tolerant design methods, enabling a host of other mission uses including reliable remote data collection. The objective of this design is both to provide a needed flight component that meets the stringent environmental requirements of current deep space missions and to add a new element to a growing library that can be used as a standard building block for future missions to the outer planets.

  19. Mass Storage and Retrieval at Rome Laboratory

    NASA Technical Reports Server (NTRS)

    Kann, Joshua L.; Canfield, Brady W.; Jamberdino, Albert A.; Clarke, Bernard J.; Daniszewski, Ed; Sunada, Gary

    1996-01-01

    As the speed and power of modern digital computers continue to advance, the demands on secondary mass storage systems grow. In many cases, the limitations of existing mass storage reduce the overall effectiveness of the computing system. Image storage and retrieval is one important area where improved storage technologies are required. Three-dimensional optical memories offer the advantage of large data density, on the order of 1 Tb/cm(exp 3), and faster transfer rates because of the parallel nature of optical recording. Such a system allows for the storage of multiple-Gbit sized images, which can be recorded and accessed at reasonable rates. Rome Laboratory is currently investigating several techniques to perform three-dimensional optical storage including holographic recording, two-photon recording, persistent spectral-hole burning, multi-wavelength DNA recording, and the use of bacteriorhodopsin as a recording material. In this paper, the current status of each of these on-going efforts is discussed. In particular, the potential payoffs as well as possible limitations are addressed.

  20. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

    We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
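
    The final step of such a conversion, writing an unstructured grid in VTU for a simulator such as OpenGeoSys, can be sketched with the meshio Python package as below; the package choice, the toy geometry, and the material-ID field are assumptions made for illustration and are not the GO2OGS workflow itself.

```python
# Minimal sketch: writing a tiny unstructured grid with a material-ID cell field
# to the VTU format using the meshio package (assumed installed: pip install meshio).
import numpy as np
import meshio

# Two triangles forming a unit square (stand-in for a converted GOCAD surface/volume mesh).
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0]])
cells = [("triangle", np.array([[0, 1, 2], [0, 2, 3]]))]

# Per-cell material IDs, e.g. geological units or fault markers carried over from the model.
cell_data = {"MaterialIDs": [np.array([0, 1])]}

mesh = meshio.Mesh(points, cells, cell_data=cell_data)
meshio.write("converted_model.vtu", mesh)
```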

  1. High performance data transfer

    NASA Astrophysics Data System (ADS)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  2. Energy management strategy for fuel cell-supercapacitor hybrid vehicles based on prediction of energy demand

    NASA Astrophysics Data System (ADS)

    Carignano, Mauro G.; Costa-Castelló, Ramon; Roda, Vicente; Nigro, Norberto M.; Junco, Sergio; Feroldi, Diego

    2017-08-01

    Offering high efficiency and producing zero emissions, Fuel Cells (FCs) represent an excellent alternative to internal combustion engines for powering vehicles to alleviate the growing pollution in urban environments. Due to inherent limitations of FCs which lead to slow transient response, FC-based vehicles incorporate an energy storage system to cover the fast power variations. This paper considers an FC/supercapacitor platform that configures a hard-constrained powertrain, providing an adverse scenario for the energy management strategy (EMS) in terms of fuel economy and drivability. Focusing on palliating this problem, this paper presents a novel EMS based on the estimation of short-term future energy demand and aiming at maintaining the state of energy of the supercapacitor between two limits, which are computed online. Such limits are designed to prevent active constraint situations of both FC and supercapacitor, avoiding the use of friction brakes and situations of non-power compliance in a short future horizon. Simulation and experimentation in a case study corresponding to a hybrid electric bus show improvements in hydrogen consumption and power compliance compared to the widely reported Equivalent Consumption Minimization Strategy. Also, the comparison with the optimal strategy via Dynamic Programming shows room for improvement for the real-time strategies.
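
    To make the band-keeping idea concrete, the toy rule below steers the supercapacitor state of energy (SoE) toward a band computed from a short-horizon demand estimate; every quantity, limit, and gain here is a hypothetical simplification introduced for illustration, not the EMS proposed by the authors.

```python
# Toy band-keeping energy split for an FC/supercapacitor powertrain (illustrative only;
# all limits and gains are hypothetical, not the authors' EMS).
def fc_power_reference(p_demand_forecast, soe, e_cap,
                       p_fc_max=60e3, dt=1.0, k=0.5):
    """Return a fuel-cell power set point that keeps the supercapacitor SoE inside a band.

    p_demand_forecast : predicted traction power over a short horizon [W] (negative = regen)
    soe               : current supercapacitor state of energy, 0..1
    e_cap             : usable supercapacitor energy capacity [J]
    """
    # Energy the FC cannot cover over the horizon must come from the supercapacitor.
    deficit = sum(max(p - p_fc_max, 0.0) * dt for p in p_demand_forecast)
    # Energy expected to flow back during braking must still fit into the supercapacitor.
    surplus = sum(max(-p, 0.0) * dt for p in p_demand_forecast)

    soe_min = min(1.0, deficit / e_cap)          # lower limit: keep enough energy in reserve
    soe_max = max(0.0, 1.0 - surplus / e_cap)    # upper limit: leave room for regeneration
    target = 0.5 * (soe_min + soe_max)

    # Mean forecast demand plus a correction pushing SoE toward the middle of the band.
    horizon = len(p_demand_forecast) * dt
    p_fc = sum(p_demand_forecast) / len(p_demand_forecast) + k * (target - soe) * e_cap / horizon
    return min(max(p_fc, 0.0), p_fc_max)

print(fc_power_reference([40e3, 80e3, 90e3, 20e3, -30e3], soe=0.6, e_cap=1.5e6))
```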

  3. Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed

    NASA Technical Reports Server (NTRS)

    Mackin, Michael A.

    1995-01-01

    This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The electrical system's primary operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.

  4. Current And Future Directions Of Lens Design Software

    NASA Astrophysics Data System (ADS)

    Gustafson, Darryl E.

    1983-10-01

    The most effective environment for doing lens design continues to evolve as new computer hardware and software tools become available. Important recent hardware developments include: Low-cost but powerful interactive multi-user 32 bit computers with virtual memory that are totally software-compatible with prior larger and more expensive members of the family. A rapidly growing variety of graphics devices for both hard-copy and screen graphics, including many with color capability. In addition, with optical design software readily accessible in many forms, optical design has become a part-time activity for a large number of engineers instead of being restricted to a small number of full-time specialists. A designer interface that is friendly for the part-time user while remaining efficient for the full-time designer is thus becoming more important as well as more practical. Along with these developments, software tools in other scientific and engineering disciplines are proliferating. Thus, the optical designer is less and less unique in his use of computer-aided techniques and faces the challenge and opportunity of efficiently communicating his designs to other computer-aided-design (CAD), computer-aided-manufacturing (CAM), structural, thermal, and mechanical software tools. This paper will address the impact of these developments on the current and future directions of the CODE VTM optical design software package, its implementation, and the resulting lens design environment.

  5. A personal computer-based nuclear magnetic resonance spectrometer

    NASA Astrophysics Data System (ADS)

    Job, Constantin; Pearson, Robert M.; Brown, Michael F.

    1994-11-01

    Nuclear magnetic resonance (NMR) spectroscopy using personal computer-based hardware has the potential of enabling the application of NMR methods to fields where conventional state of the art equipment is either impractical or too costly. With such a strategy for data acquisition and processing, disciplines including civil engineering, agriculture, geology, archaeology, and others have the possibility of utilizing magnetic resonance techniques within the laboratory or conducting applications directly in the field. Another aspect is the possibility of utilizing existing NMR magnets which may be in good condition but unused because of outdated or nonrepairable electronics. Moreover, NMR applications based on personal computer technology may open up teaching possibilities at the college or even secondary school level. The goal of developing such a personal computer (PC)-based NMR standard is facilitated by existing technologies including logic cell arrays, direct digital frequency synthesis, use of PC-based electrical engineering software tools to fabricate electronic circuits, and the use of permanent magnets based on neodymium-iron-boron alloy. Utilizing such an approach, we have been able to place essentially an entire NMR spectrometer console on two printed circuit boards, with the exception of the receiver and radio frequency power amplifier. Future upgrades to include the deuterium lock and the decoupler unit are readily envisioned. The continued development of such PC-based NMR spectrometers is expected to benefit from the fast growing, practical, and low cost personal computer market.

  6. Wind energy applications for municipal water services: Opportunities, situational analyses, and case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flowers, L.; Miner-Nordstrom, L.

    2006-01-01

    As communities grow, greater demands are placed on water supplies, wastewater services, and the electricity needed to power the growing water services infrastructure. Water is also a critical resource for thermoelectric power plants. Future population growth in the United States is therefore expected to heighten competition for water resources. Especially in arid U.S. regions, communities may soon face hard choices with respect to water and electric power. Many parts of the United States with increasing water stresses also have significant wind energy resources. Wind power is the fastest-growing electric generation source in the United States and is decreasing in cost to be competitive with thermoelectric generation. Wind energy can potentially offer communities in water-stressed areas the option of economically meeting increasing energy needs without increasing demands on valuable water resources. Wind energy can also provide targeted energy production to serve critical local water-system needs. The U.S. Department of Energy (DOE) Wind Energy Technologies Program has been exploring the potential for wind power to meet growing challenges for water supply and treatment. The DOE is currently characterizing the U.S. regions that are most likely to benefit from wind-water applications and is also exploring the associated technical and policy issues associated with bringing wind energy to bear on water resource challenges.

  7. 3D Printing in the Laboratory: Maximize Time and Funds with Customized and Open-Source Labware.

    PubMed

    Coakley, Meghan; Hurt, Darrell E

    2016-08-01

    3D printing, also known as additive manufacturing, is the computer-guided process of fabricating physical objects by depositing successive layers of material. It has transformed manufacturing across virtually every industry, bringing about incredible advances in research and medicine. The rapidly growing consumer market now includes convenient and affordable "desktop" 3D printers. These are being used in the laboratory to create custom 3D-printed equipment, and a growing community of designers are contributing open-source, cost-effective innovations that can be used by both professionals and enthusiasts. User stories from investigators at the National Institutes of Health and the biomedical research community demonstrate the power of 3D printing to save valuable time and funding. While adoption of 3D printing has been slow in the biosciences to date, the potential is vast. The market predicts that within several years, 3D printers could be commonplace within the home; with so many practical uses for 3D printing, we anticipate that the technology will also play an increasingly important role in the laboratory. © 2016 Society for Laboratory Automation and Screening.

  8. 3D Printing in the Laboratory: Maximize Time and Funds with Customized and Open-Source Labware

    PubMed Central

    Coakley, Meghan; Hurt, Darrell E.

    2016-01-01

    3D printing, also known as additive manufacturing, is the computer-guided process of fabricating physical objects by depositing successive layers of material. It has transformed manufacturing across virtually every industry, bringing about incredible advances in research and medicine. The rapidly growing consumer market now includes convenient and affordable “desktop” 3D printers. These are being used in the laboratory to create custom 3D-printed equipment, and a growing community of designers are contributing open-source, cost-effective innovations that can be used by both professionals and enthusiasts. User stories from investigators at the National Institutes of Health and the biomedical research community demonstrate the power of 3D printing to save valuable time and funding. While adoption of 3D printing has been slow in the biosciences to date, the potential is vast. The market predicts that within several years, 3D printers could be commonplace within the home; with so many practical uses for 3D printing, we anticipate that the technology will also play an increasingly important role in the laboratory. PMID:27197798

  9. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  10. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.

  11. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
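
    For reference, a common normal-approximation expression for power in a balanced two-level (cluster-randomized) design is shown below; it is a standard textbook form rather than the specific tables the article develops, and the notation (J clusters of size n, intraclass correlation rho, standardized effect delta) is introduced here only for illustration.

```latex
% Approximate power for a balanced cluster-randomized design with J clusters of size n,
% standardized treatment effect \delta, intraclass correlation \rho, two-sided level \alpha.
\lambda \;=\; \frac{\delta\,\sqrt{J}}{2\,\sqrt{\rho + (1-\rho)/n}},
\qquad
\text{power} \;\approx\; 1 - \Phi\!\bigl(z_{1-\alpha/2} - \lambda\bigr).
```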

  12. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form that supplies computational power to any nodes within the grid that need computing power has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process.
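
    As a stand-in for distributing a compute-intensive Monte Carlo job across grid nodes, the sketch below splits samples across local worker processes; it illustrates only the work-splitting idea on a single machine and none of the grid middleware, node discovery, or scheduling that the paper concerns.

```python
# Toy illustration of splitting Monte Carlo work across workers
# (single-machine stand-in for distributing work across grid nodes).
import random
from multiprocessing import Pool

def count_hits(n_samples):
    """Count random points falling inside the unit quarter circle."""
    rng = random.Random()
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    total, workers = 4_000_000, 8
    chunks = [total // workers] * workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, chunks))
    print("pi estimate:", 4.0 * hits / total)
```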

  13. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water, energy, space, and cost efficient system for growing plants in constrained spaces or land exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plants growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Flood inundation extent mapping based on block compressed tracing

    NASA Astrophysics Data System (ADS)

    Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang

    2015-07-01

    Flood inundation extent, depth, and duration are important factors affecting flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is an inefficient process because it requires excessive recursive computations and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which addresses the memory limitation of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping based on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
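
    For context, the baseline the authors improve on, seeded region growing over a DEM, can be sketched as below: starting from a seed cell, it repeatedly adds neighboring cells whose elevation lies below the water level. The block-compressed boundary-tracing method itself is not reproduced here; this is only the reference approach whose memory and recursion costs motivate it, with a toy DEM as an assumed example.

```python
# Baseline seeded region growing for flood extent on a small DEM
# (iterative BFS instead of recursion; the paper's block-compressed
# tracing method is not shown here).
from collections import deque
import numpy as np

def flood_extent(dem, seed, water_level):
    """Return a boolean mask of cells connected to `seed` with elevation below water_level."""
    rows, cols = dem.shape
    flooded = np.zeros_like(dem, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if flooded[r, c] or dem[r, c] >= water_level:
            continue
        flooded[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = np.array([[3.0, 2.5, 4.0],
                [2.0, 1.5, 3.5],
                [2.2, 1.0, 0.8]])
print(flood_extent(dem, seed=(2, 2), water_level=2.1).astype(int))
```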

  15. Early repositioning through compound set enrichment analysis: a knowledge-recycling strategy.

    PubMed

    Temesi, Gergely; Bolgár, Bence; Arany, Adám; Szalai, Csaba; Antal, Péter; Mátyus, Péter

    2014-04-01

    Despite famous serendipitous drug repositioning success stories, systematic projects have not yet delivered the expected results. However, repositioning technologies are gaining ground in different phases of routine drug development, together with new adaptive strategies. We demonstrate the power of the compound information pool, the ever-growing heterogeneous information repertoire of approved drugs and candidates as an invaluable catalyzer in this transition. Systematic, computational utilization of this information pool for candidates in early phases is an open research problem; we propose a novel application of the enrichment analysis statistical framework for fusion of this information pool, specifically for the prediction of indications. Pharmaceutical consequences are formulated for a systematic and continuous knowledge recycling strategy, utilizing this information pool throughout the drug-discovery pipeline.
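
    The statistical core of a compound set enrichment test is typically an over-representation test such as the hypergeometric test sketched below; the example counts and the choice of test are illustrative assumptions and not the authors' specific fusion statistic.

```python
# Generic over-representation (enrichment) test for a compound set
# against an indication-annotated background (illustrative only).
from scipy.stats import hypergeom

def enrichment_pvalue(background_size, annotated_in_background, set_size, annotated_in_set):
    """P(X >= annotated_in_set) for a hypergeometric draw of `set_size` compounds."""
    return hypergeom.sf(annotated_in_set - 1, background_size,
                        annotated_in_background, set_size)

# Example: 40 of 5,000 background compounds carry the indication of interest;
# a candidate-related set of 25 compounds contains 6 of them.
p = enrichment_pvalue(5000, 40, 25, 6)
print(f"enrichment p-value: {p:.2e}")
```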

  16. Economic analysis of wind-powered farmhouse and farm building heating systems

    NASA Astrophysics Data System (ADS)

    Stafford, R. W.; Greeb, F. J.; Smith, M. H.; Deschenes, C.; Weaver, N. L.

    1981-01-01

    The break-even values of wind energy for selected farmhouses and farm buildings were evaluated, focusing on the effects of thermal storage on the use of WECS production. Farmhouse structural models include three types derived from a national survey: an older, a more modern, and a passive solar structure. The eight farm building applications include: (1) poultry layers; (2) poultry brooding/layers; (3) poultry broilers; (4) poultry turkeys; (5) swine farrowing; (6) swine growing/finishing; (7) dairy; and (8) lambing. The farm buildings represent the spectrum of animal types, heating energy use, and major contributions to national agricultural economic values. All energy analyses are based on hour-by-hour computations which allow for growth of animals, sensible and latent heat production, and ventilation requirements.

  17. Detecting spatial defects in colored patterns using self-oscillating gels

    NASA Astrophysics Data System (ADS)

    Fang, Yan; Yashin, Victor V.; Dickerson, Samuel J.; Balazs, Anna C.

    2018-06-01

    With the growing demand for wearable computers, there is a need for material systems that can perform computational tasks without relying on external electrical power. Using theory and simulation, we design a material system that "computes" by integrating the inherent behavior of self-oscillating gels undergoing the Belousov-Zhabotinsky (BZ) reaction and piezoelectric (PZ) plates. These "BZ-PZ" units are connected electrically to form a coupled oscillator network, which displays specific modes of synchronization. We exploit this attribute in employing multiple BZ-PZ networks to perform pattern matching on complex multi-dimensional data, such as colored images. By decomposing a colored image into sets of binary vectors, we use each BZ-PZ network, or "channel," to store distinct information about the color and the shape of the image and perform the pattern matching operation. Our simulation results indicate that the multi-channel BZ-PZ device can detect subtle differences between the input and stored patterns, such as the color variation of one pixel or a small change in the shape of an object. To demonstrate a practical application, we utilize our system to process a colored Quick Response code and show its potential in cryptography and steganography.

  18. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
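
    Because the method is stated in terms of relational algebra over protein-to-domain assignments, a minimal relational sketch is given below using sqlite3: it finds "composite" proteins carrying a pair of domains that also occur separately in other proteins. The toy schema and data are assumptions introduced here; the actual analysis ran over SWISS-PROT+TrEMBL with Pfam assignments.

```python
# Minimal relational sketch of domain fusion analysis over a protein-domain table
# (toy schema and data; the paper's analysis used SWISS-PROT+TrEMBL with Pfam domains).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assignment (protein TEXT, domain TEXT)")
conn.executemany("INSERT INTO assignment VALUES (?, ?)", [
    ("P_fused", "PF_A"), ("P_fused", "PF_B"),   # composite protein carrying both domains
    ("P_1", "PF_A"),                            # domain A alone in another protein
    ("P_2", "PF_B"),                            # domain B alone in a third protein
])

# Composite proteins carrying a domain pair, joined to separate proteins that each carry
# one member of the pair -> predicted functional link between P_1 and P_2.
rows = conn.execute("""
    SELECT c1.protein AS composite, a.protein AS partner1, b.protein AS partner2,
           c1.domain AS dom1, c2.domain AS dom2
    FROM assignment c1
    JOIN assignment c2 ON c1.protein = c2.protein AND c1.domain < c2.domain
    JOIN assignment a  ON a.domain = c1.domain AND a.protein != c1.protein
    JOIN assignment b  ON b.domain = c2.domain AND b.protein != c1.protein
    WHERE a.protein != b.protein
""").fetchall()

for row in rows:
    print(row)
```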

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553. This is a 23% increase in attendance over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.

  20. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

    Numerical methods are used to design a spherical baroclinic flow model experiment of the large scale atmosphere flow for Spacelab. The dielectric simulation of radial gravity is only dominant in a low gravity environment. Computer codes are developed to study the processes at work in crystal growing systems which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low gravity environment. Two and three dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings. This allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time dependent coordinate transformation is used to make this interface a coordinate surface.

  1. Image-guided techniques in renal and hepatic interventions.

    PubMed

    Najmaei, Nima; Mostafavi, Kamal; Shahbazi, Sahar; Azizian, Mahdi

    2013-12-01

    Development of new imaging technologies and advances in computing power have enabled the physicians to perform medical interventions on the basis of high-quality 3D and/or 4D visualization of the patient's organs. Preoperative imaging has been used for planning the surgery, whereas intraoperative imaging has been widely employed to provide visual feedback to a clinician when he or she is performing the procedure. In the past decade, such systems demonstrated great potential in image-guided minimally invasive procedures on different organs, such as brain, heart, liver and kidneys. This article focuses on image-guided interventions and surgery in renal and hepatic surgeries. A comprehensive search of existing electronic databases was completed for the period of 2000-2011. Each contribution was assessed by the authors for relevance and inclusion. The contributions were categorized on the basis of the type of operation/intervention, imaging modality and specific techniques such as image fusion and augmented reality, and organ motion tracking. As a result, detailed classification and comparative study of various contributions in image-guided renal and hepatic interventions are provided. In addition, the potential future directions have been sketched. With a detailed review of the literature, potential future trends in development of image-guided abdominal interventions are identified, namely, growing use of image fusion and augmented reality, computer-assisted and/or robot-assisted interventions, development of more accurate registration and navigation techniques, and growing applications of intraoperative magnetic resonance imaging. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions.

    PubMed

    Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J

    2015-07-28

    Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.

  3. Computer Literacy and Societal Impact of Computers: Education and Manpower.

    ERIC Educational Resources Information Center

    Hamblen, John W.

    The use of computers in education and general society is continually expanding, but the growing need for highly educated computer technologists will continue unless universities take major steps toward the reallocation of their resources in favor of the burgeoning computer science departments, and government and industry provide a significant…

  4. Inquiring Minds

    Science.gov Websites

  5. The Computers in Our Lives.

    ERIC Educational Resources Information Center

    Uthe, Elaine F.

    1982-01-01

    Describes the growing use of computers in our world and how their use will affect vocational education. Discusses recordkeeping and database functions, computer graphics, problem-solving simulations, satellite communications, home computers, and how they will affect office education, home economics education, marketing and distributive education,…

  6. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  7. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  8. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  9. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  10. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  11. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  12. An Autonomous Underwater Recorder Based on a Single Board Computer

    PubMed Central

    Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson

    2015-01-01

    As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Design and construction of more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a successful model of a highly flexible, low-cost alternative to commercial recorders was built based on a Raspberry Pi single-board computer. A properly working prototype was assembled, and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated. It is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and manmade acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantage offered by the proposed device is its high customizability, lower development time and, inherently, its cost. PMID:26076479

  13. 78 FR 47804 - Verification, Validation, Reviews, and Audits for Digital Computer Software Used in Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...

  14. Nuclear power generation and fuel cycle report 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-09-01

    Nuclear power is an important source of electric energy and the amount of nuclear-generated electricity continued to grow as the performance of nuclear power plants improved. In 1996, nuclear power plants supplied 23 percent of the electricity production for countries with nuclear units, and 17 percent of the total electricity generated worldwide. However, the likelihood of nuclear power assuming a much larger role or even retaining its current share of electricity generation production is uncertain. The industry faces a complex set of issues including economic competitiveness, social acceptance, and the handling of nuclear waste, all of which contribute to the uncertain future of nuclear power. Nevertheless, for some countries the installed nuclear generating capacity is projected to continue to grow. Insufficient indigenous energy resources and concerns over energy independence make nuclear electric generation a viable option, especially for the countries of the Far East.

  15. Factors Influencing the Adoption of Cloud Computing by Decision Making Managers

    ERIC Educational Resources Information Center

    Ross, Virginia Watson

    2010-01-01

    Cloud computing is a growing field, addressing the market need for access to computing resources to meet organizational computing requirements. The purpose of this research is to evaluate the factors that influence an organization in their decision whether to adopt cloud computing as a part of their strategic information technology planning.…

  16. Foundational Report Series. Advanced Distribution management Systems for Grid Modernization (Importance of DMS for Distribution Grid Modernization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui

    2015-09-01

    Grid modernization is transforming the operation and management of electric distribution systems from manual, paper-driven business processes to electronic, computer-assisted decisionmaking. At the center of this business transformation is the distribution management system (DMS), which provides a foundation from which optimal levels of performance can be achieved in an increasingly complex business and operating environment. Electric distribution utilities are facing many new challenges that are dramatically increasing the complexity of operating and managing the electric distribution system: growing customer expectations for service reliability and power quality, pressure to achieve better efficiency and utilization of existing distribution system assets, and reduction of greenhouse gas emissions by accommodating high penetration levels of distributed generating resources powered by renewable energy sources (wind, solar, etc.). Recent “storm of the century” events in the northeastern United States and the lengthy power outages and customer hardships that followed have greatly elevated the need to make power delivery systems more resilient to major storm events and to provide a more effective electric utility response during such regional power grid emergencies. Despite these newly emerging challenges for electric distribution system operators, only a small percentage of electric utilities have actually implemented a DMS. This paper discusses reasons why a DMS is needed and why the DMS may emerge as a mission-critical system that will soon be considered essential as electric utilities roll out their grid modernization strategies.

  17. Simulation training tools for nonlethal weapons using gaming environments

    NASA Astrophysics Data System (ADS)

    Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry

    2006-05-01

    Modern simulation techniques have a growing role for evaluating new technologies and for developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstration of concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real-time. Computer games now allow for multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherent intensive calculations required for complex simulation scenarios. The main components of the leading game-engines have been released for user modifications, enabling game enthusiasts and amateur programmers to advance the state-of-the-art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users: learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices, and for developing training tools for such devices.

  18. Bounds on the power of proofs and advice in general physical theories.

    PubMed

    Lee, Ciarán M; Hoban, Matty J

    2016-06-01

    Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure-such as the fact that observables correspond to self-adjoint operators-and thus may be of independent interest.

  19. The Power of Questioning: Guiding Student Investigations

    ERIC Educational Resources Information Center

    McGough, Julie V.; Nyberg, Lisa M.

    2015-01-01

    This pedagogical picture book is a powerful tool in a small package. The authors of "The Power of Questioning" invite you to nurture the potential for learning that grows out of children's irrepressible urges to ask questions. The book's foundation is a three-part instructional model, Powerful Practices, grounded in questioning,…

  20. Social Play at the Computer: Preschoolers Scaffold and Support Peers' Computer Competence.

    ERIC Educational Resources Information Center

    Freeman, Nancy K.; Somerindyke, Jennifer

    2001-01-01

    Describes preschoolers' collaboration during free play in a computer lab, focusing on the computer's contribution to active, peer-mediated learning. Discusses these observations in terms of Parten's insights on children's social play and Vygotsky's socio-cultural learning theory, noting that the children scaffolded each other's growing computer…

  1. The origins of computer weather prediction and climate modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Peter

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  2. The origins of computer weather prediction and climate modeling

    NASA Astrophysics Data System (ADS)

    Lynch, Peter

    2008-03-01

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  3. Einstein@Home Discovery of 24 Pulsars in the Parkes Multi-beam Pulsar Survey

    NASA Astrophysics Data System (ADS)

    Knispel, B.; Eatough, R. P.; Kim, H.; Keane, E. F.; Allen, B.; Anderson, D.; Aulbert, C.; Bock, O.; Crawford, F.; Eggenstein, H.-B.; Fehrmann, H.; Hammer, D.; Kramer, M.; Lyne, A. G.; Machenschalk, B.; Miller, R. B.; Papa, M. A.; Rastawicki, D.; Sarkissian, J.; Siemens, X.; Stappers, B. W.

    2013-09-01

    We have conducted a new search for radio pulsars in compact binary systems in the Parkes multi-beam pulsar survey (PMPS) data, employing novel methods to remove the Doppler modulation from binary motion. This has yielded unparalleled sensitivity to pulsars in compact binaries. The required computation time of ≈17,000 CPU core years was provided by the distributed volunteer computing project Einstein@Home, which has a sustained computing power of about 1 PFlop s⁻¹. We discovered 24 new pulsars in our search, 18 of which were isolated pulsars, and 6 were members of binary systems. Despite the wide filterbank channels and relatively slow sampling time of the PMPS data, we found pulsars with very large ratios of dispersion measure (DM) to spin period. Among those is PSR J1748-3009, the millisecond pulsar with the highest known DM (≈420 pc cm⁻³). We also discovered PSR J1840-0643, which is in a binary system with an orbital period of 937 days, the fourth largest known. The new pulsar J1750-2536 likely belongs to the rare class of intermediate-mass binary pulsars. Three of the isolated pulsars show long-term nulling or intermittency in their emission, further increasing this growing family. Our discoveries demonstrate the value of distributed volunteer computing for data-driven astronomy and the importance of applying new analysis methods to extensively searched data.

  4. Applications of Out-of-Domain Knowledge in Students' Reasoning about Computer Program State

    ERIC Educational Resources Information Center

    Lewis, Colleen Marie

    2012-01-01

    To meet a growing demand and a projected deficit in the supply of computer professionals (NCWIT, 2009), it is of vital importance to expand students' access to computer science. However, many researchers in the computer science education community unproductively assume that some students lack an innate ability for computer science and…

  5. The Use of Computers in Early Childhood Education: A Growing Concern.

    ERIC Educational Resources Information Center

    Tse, Miranda Siu Ping

    A rationale for introducing computers in early childhood education in Hong Kong is presented. Findings from a survey of computer use in preschools, and case studies of preschools in which children use computers but do not achieve desired results, provide the background for this discussion. Arguments advanced for the use of computers concern…

  6. Statistical detection of EEG synchrony using empirical bayesian inference.

    PubMed

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit the complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimensions. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
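
    As an illustration only (not the authors' implementation), the following minimal Python sketch shows how a single phase-locking value between two band-limited EEG channels can be computed from instantaneous phases obtained with the Hilbert transform; the sampling rate, toy signals, and the 10 Hz oscillation are assumptions made for the example.

        import numpy as np
        from scipy.signal import hilbert

        def phase_locking_value(x, y):
            """PLV between two equally long, band-limited 1-D signals."""
            phase_x = np.angle(hilbert(x))
            phase_y = np.angle(hilbert(y))
            # Mean resultant length of the phase differences: 1 = perfect locking, 0 = none.
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

        # Toy example: two noisy 10 Hz oscillations sharing a fixed phase offset.
        fs = 250.0
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
        print(round(phase_locking_value(x, y), 3))

    In a real analysis one such PLV is computed per channel pair, frequency band, and time window, which is exactly what produces the large family of hypotheses that the local FDR procedure is then used to screen.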

  7. The hydrodynamic theory of detonation

    NASA Technical Reports Server (NTRS)

    Langweiler, Heinz

    1939-01-01

    This report derives equations containing only directly measurable constants for the quantities involved in the hydrodynamic theory of detonation. The stable detonation speed, D, is revealed as having the lowest possible value in the case of positive material velocity, by finding the minimum of the Du curve (u denotes the speed of the gases of combustion). A study of the conditions of energy and impulse in freely suspended detonating systems leads to the disclosure of a rarefaction front traveling at a lower speed behind the detonation front; its velocity is computed. The latent energy of the explosive passes into the steadily growing detonation zone - the region between the detonation front and the rarefaction front. The conclusions lead to a new definition of the concept of shattering power. The calculations are based on the behavior of trinitrotoluene.

  8. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375

  9. Patch-based models and algorithms for image processing: a review of the basic principles and methods, and their application in computed tomography.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-10-01

    Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
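
    To make the "patch-based" idea concrete, here is a deliberately tiny, generic non-local-means-style filter in Python; it is a sketch under the stated assumptions (smooth synthetic image, small patch size, O(N²) brute-force weighting) and is not any specific algorithm from the review.

        import numpy as np

        def extract_patches(img, size):
            """All overlapping size x size patches of a 2-D image, flattened to rows."""
            h, w = img.shape
            rows = []
            for i in range(h - size + 1):
                for j in range(w - size + 1):
                    rows.append(img[i:i + size, j:j + size].ravel())
            return np.array(rows)

        def patch_denoise(img, size=5, h=0.1):
            """Toy non-local-means-style filter: each patch centre is replaced by a
            weighted average of the centres of similar patches (small images only)."""
            pad = size // 2
            patches = extract_patches(img, size)
            centres = patches[:, (size * size) // 2]
            new_centres = np.empty(len(patches))
            for k, p in enumerate(patches):
                d2 = np.mean((patches - p) ** 2, axis=1)   # patch-to-patch similarity
                w = np.exp(-d2 / (h * h))
                new_centres[k] = np.sum(w * centres) / np.sum(w)
            out = img.astype(float).copy()
            out[pad:img.shape[0] - pad, pad:img.shape[1] - pad] = new_centres.reshape(
                img.shape[0] - size + 1, img.shape[1] - size + 1)
            return out

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0, 1, 32), (32, 1))      # smooth toy "slice"
        noisy = clean + 0.05 * rng.standard_normal(clean.shape)
        print(float(np.mean((noisy - clean) ** 2)),          # MSE before filtering
              float(np.mean((patch_denoise(noisy) - clean) ** 2)))  # MSE after filtering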

  10. Research on computer virus database management system

    NASA Astrophysics Data System (ADS)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a serious threat to network information security and a focus of security research. New viruses keep emerging, the total number of viruses keeps growing, and virus classification is becoming increasingly complex. Because different agencies capture samples at different times, virus naming cannot be unified. Although each agency maintains its own virus database, communication between agencies is lacking, virus information is often incomplete, and sample information may be scarce. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then presents a computer virus database design scheme that addresses information integrity, storage security, and manageability.

  11. Energy Efficiency Challenges of 5G Small Cell Networks.

    PubMed

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-05-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
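
    For orientation, here is a back-of-the-envelope Python sketch of the Landauer principle the paper builds on: the minimum energy to erase one bit is k_B·T·ln 2. The temperature and operation rate below are assumed values chosen purely for illustration, not figures from the paper.

        import math

        k_B = 1.380649e-23            # Boltzmann constant, J/K
        T = 300.0                     # assumed operating temperature, K

        e_bit = k_B * T * math.log(2)                 # Landauer limit per irreversible bit operation
        print(f"Landauer limit at {T:.0f} K: {e_bit:.2e} J/bit")

        ops_per_s = 1e18                              # assumed bit operations per second
        print(f"Theoretical floor at {ops_per_s:.0e} bit ops/s: {e_bit * ops_per_s * 1e3:.2f} mW")

    Real base-station computation power (hundreds of watts in the paper's simulations) sits many orders of magnitude above this thermodynamic floor, which is what makes computation power a meaningful optimization target.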

  12. Energy Efficiency Challenges of 5G Small Cell Networks

    PubMed Central

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-01-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670

  13. Energy Use and Power Levels in New Monitors and Personal Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.

  14. Beam and Plasma Physics Research

    DTIC Science & Technology

    1990-06-01

    The report covers research in high-power microwave (HPM) computations and theory and high-energy plasma computations and theory. The HPM computations concentrated on Task Area 2: High-Power RF Emission and Charged-Particle Beam Physics Computation, Modeling and Theory, with subtasks including vulnerability of space assets, microwave computer program enhancements, and high-power microwave Transvertron design.

  15. 3-D Electromagnetic field analysis of wireless power transfer system using K computer

    NASA Astrophysics Data System (ADS)

    Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi

    2018-05-01

    We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is clarified that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.

  16. Optimizing Data Centre Energy and Environmental Costs

    NASA Astrophysics Data System (ADS)

    Aikema, David Hendrik

    Data centres use an estimated 2% of US electrical power, which accounts for much of their total cost of ownership. This consumption continues to grow, further straining power grids attempting to integrate more renewable energy. This dissertation focuses on assessing and reducing data centre environmental and financial costs. Emissions of projects undertaken to lower the data centre environmental footprints can be assessed and the emission reduction projects compared using an ISO-14064-2-compliant greenhouse gas reduction protocol outlined herein. I was closely involved with the development of the protocol. Full lifecycle analysis and verifying that projects exceed business-as-usual expectations are addressed, and a test project is described. Consuming power when it is low cost or when renewable energy is available can be used to reduce the financial and environmental costs of computing. Adaptation based on the power price showed 10-50% potential savings in typical cases, and local renewable energy use could be increased by 10-80%. Allowing a fraction of high-priority tasks to proceed unimpeded still allows significant savings. Power grid operators use mechanisms called ancillary services to address variation and system failures, paying organizations to alter power consumption on request. By bidding to offer these services, data centres may be able to lower their energy costs while reducing their environmental impact. If providing contingency reserves which require only infrequent action, savings of up to 12% were seen in simulations. Greater power cost savings are possible for those ceding more control to the power grid operator. Coordinating multiple data centres adds overhead, and altering at which data centre requests are processed based on changes in the financial or environmental costs of power is likely to increase this overhead. Tests of virtual machine migrations showed that in some cases there was no visible increase in power use while in others power use rose by 20-30 W. Estimates of how migration was likely to impact other services used in current cloud environments were derived.

  17. SPECT Perfusion Imaging Demonstrates Improvement of Traumatic Brain Injury With Transcranial Near-infrared Laser Phototherapy.

    PubMed

    Henderson, Theodore A; Morries, Larry D

    2015-01-01

    Traumatic brain injury (TBI) is a growing health concern affecting civilians and military personnel. Near-infrared (NIR) light has shown benefits in animal models and human trials for stroke and in animal models for TBI. Diodes emitting low-level NIR often have lacked therapeutic efficacy, perhaps failing to deliver sufficient radiant energy to the necessary depth. In this case report, a patient with moderate TBI documented in anatomical magnetic resonance imaging (MRI) and perfusion single-photon emission computed tomography (SPECT) received 20 NIR treatments in the course of 2 mo using a high-power NIR laser. Symptoms were monitored by clinical examination and a novel patient diary system specifically designed for this patient population. Clinical application of these levels of infrared energy for this patient with TBI yielded highly favorable outcomes with decreased depression, anxiety, headache, and insomnia, whereas cognition and quality of life improved. Neurological function appeared to improve based on changes in the SPECT by quantitative analysis. NIR in the power range of 10-15 W at 810 and 980 nm can safely and effectively treat chronic symptoms of TBI.

  18. Electricity Capacity Expansion Modeling, Analysis, and Visualization. A Summary of High-Renewable Modeling Experience for China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, Nate; Zhou, Ella; Getman, Dan

    2015-10-01

    Mathematical and computational models are widely used for the analysis and design of both physical and financial systems. Modeling the electric grid is of particular importance to China for three reasons. First, power-sector assets are expensive and long-lived, and they are critical to any country's development. China's electric load, transmission, and other energy-related infrastructure are expected to continue to grow rapidly; therefore it is crucial to understand and help plan for the future in which those assets will operate (NDRC ERI 2015). Second, China has dramatically increased its deployment of renewable energy (RE), and is likely to continue further accelerating such deployment over the coming decades. Careful planning and assessment of the various aspects (technical, economic, social, and political) of integrating a large amount of renewables on the grid is required. Third, companies need the tools to develop a strategy for their own involvement in the power market China is now developing, and to enable a possible transition to an efficient and high RE future.

  19. Bayesian-Driven First-Principles Calculations for Accelerating Exploration of Fast Ion Conductors for Rechargeable Battery Application.

    PubMed

    Jalem, Randy; Kanamori, Kenta; Takeuchi, Ichiro; Nakayama, Masanobu; Yamasaki, Hisatsugu; Saito, Toshiya

    2018-04-11

    Safe and robust batteries are urgently requested today for power sources of electric vehicles. Thus, a growing interest has been noted for fabricating those with solid electrolytes. Materials search by density functional theory (DFT) methods offers great promise for finding new solid electrolytes, but the evaluation is known to be computationally expensive, particularly for the ion migration property. In this work, we proposed a Bayesian-optimization-driven DFT-based approach to efficiently screen for compounds with low ion migration energies ([Formula: see text]). We demonstrated this on 318 tavorite-type Li- and Na-containing compounds. We found that the scheme only requires ~30% of the total DFT-[Formula: see text] evaluations on average to recover the optimal compound ~90% of the time. Its recovery performance for desired compounds in the tavorite search space is ~2× better than random search (i.e., for [Formula: see text] < 0.3 eV). Our approach offers a promising way for addressing computational bottlenecks in large-scale material screening for fast ionic conductors.
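
    The following Python sketch illustrates the general shape of a Bayesian-optimization screening loop over a fixed candidate pool, using a Gaussian-process surrogate and expected improvement; the function names, the surrogate choice, and the toy objective are assumptions made for illustration, and in the paper the expensive evaluation is a DFT migration-energy calculation rather than the placeholder below.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        def bayes_opt_screen(X_all, evaluate, n_init=5, n_iter=20, seed=0):
            """Find the minimizer of an expensive function over a fixed candidate set
            X_all (n_candidates x n_features) using expected improvement."""
            rng = np.random.default_rng(seed)
            idx = list(rng.choice(len(X_all), size=n_init, replace=False))
            y = [evaluate(X_all[i]) for i in idx]
            gp = GaussianProcessRegressor(normalize_y=True)
            for _ in range(n_iter):
                gp.fit(X_all[idx], np.array(y))
                mu, sigma = gp.predict(X_all, return_std=True)
                sigma = np.maximum(sigma, 1e-9)
                best = min(y)
                z = (best - mu) / sigma
                ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
                ei[idx] = -np.inf                                      # never re-evaluate a candidate
                nxt = int(np.argmax(ei))
                idx.append(nxt)
                y.append(evaluate(X_all[nxt]))
            return idx[int(np.argmin(y))], min(y)

        # Toy stand-in for the expensive DFT migration-energy evaluation.
        X = np.linspace(0, 1, 318).reshape(-1, 1)
        best_i, best_e = bayes_opt_screen(X, lambda x: float((x[0] - 0.42) ** 2))
        print(best_i, best_e)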

  20. Agent-Based Intelligent Interface for Wheelchair Movement Control

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.

    2018-01-01

    People who suffer from any kind of motor difficulty face serious complications in moving autonomously in their daily lives. However, a growing number of research projects propose different powered wheelchair control systems. Despite the interest of the research community in the area, there is no platform that allows easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents in computationally limited devices. In order to validate the proper functioning of the proposed system, it has been integrated into a conventional wheelchair, and a set of alternative control interfaces has been developed and deployed, including a portable electroencephalography system, a voice interface, and a specifically designed smartphone application. A set of tests was conducted to assess both the adequacy of the platform and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603

  1. Solution-Processed Carbon Nanotube True Random Number Generator.

    PubMed

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
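
    The paper's SRAM cells digitize thermal noise directly; as a purely generic illustration of the kind of post-processing often applied to raw hardware bits (and explicitly not taken from the paper), the Python sketch below shows the classic von Neumann corrector removing bias from a toy bit stream.

        import random

        def von_neumann_extract(bits):
            """Von Neumann corrector: read bits in non-overlapping pairs, emit the
            first bit of each unequal pair, and discard equal pairs. The output is
            unbiased if the input bits are independent with a fixed bias."""
            out = []
            for a, b in zip(bits[::2], bits[1::2]):
                if a != b:
                    out.append(a)
            return out

        # Toy source: a heavily biased stream standing in for raw thermal-noise bits.
        random.seed(0)
        raw = [1 if random.random() < 0.7 else 0 for _ in range(10000)]
        clean = von_neumann_extract(raw)
        print(len(clean), round(sum(clean) / len(clean), 3))   # fewer bits, ratio near 0.5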

  2. Computer program analyzes and monitors electrical power systems (POSIMO)

    NASA Technical Reports Server (NTRS)

    Jaeger, K.

    1972-01-01

    Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. A computer program to analyze the power system and generate a set of characteristic power system data is described. The application of status indicators to denote different exclusive conditions is presented.

  3. Coal and Coal/Biomass-Based Power Generation

    EPA Science Inventory

    For Frank Princiotta's book, Global Climate Change--The Technology Challenge. Coal is a key, growing component in power generation globally. It generates 50% of U.S. electricity, and criteria emissions from coal-based power generation are being reduced. However, CO2 emissions m...

  4. Online, offline, realtime: recent developments in industrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Boesemann, Werner

    2003-01-01

    In recent years industrial photogrammetry has emerged from a highly specialized niche technology to a well established tool in industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful, but affordable, video and computer technology. The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hard- and software solutions to meet these requirements. The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering like use of controlled illumination or projection technologies, and algorithmic and software aspects like automation strategies or new camera models. The basic qualities of digital photogrammetry, like portability and flexibility on the one hand and fully automated quality control on the other, sometimes lead to certain conflicts in the design of measurement systems for different online, offline, or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to be able to cover the still growing demands of the industrial end-users.

  5. Photogrammetry in the line: recent developments in industrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Boesemann, Werner

    2003-05-01

    In recent years industrial photogrammetry has emerged from a highly specialized niche technology to a well established tool in industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful, but affordable, video and computer technology. The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hard- and software solutions to meet these requirements. The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering like use of controlled illumination or projection technologies, and algorithmic and software aspects like automation strategies or new camera models. The basic qualities of digital photogrammetry, like portability and flexibility on the one hand and fully automated quality control on the other, sometimes lead to certain conflicts in the design of measurement systems for different online, offline, or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to be able to cover the still growing demands of the industrial end-users.

  6. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
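
    The abstract mentions high-speed delta modulation for bit-rate compression of television signals; the Python sketch below shows only the basic 1-bit form of delta modulation on a toy waveform, with the step size and test signal chosen arbitrarily for illustration.

        import numpy as np

        def delta_modulate(signal, step=0.05):
            """1-bit delta modulation: transmit only whether the input is above or
            below the decoder's running estimate, then move the estimate by one step."""
            bits, estimate = [], 0.0
            for s in signal:
                bit = 1 if s >= estimate else 0
                bits.append(bit)
                estimate += step if bit else -step
            return bits

        def delta_demodulate(bits, step=0.05):
            out, estimate = [], 0.0
            for bit in bits:
                estimate += step if bit else -step
                out.append(estimate)
            return out

        t = np.linspace(0.0, 1.0, 500)
        x = 0.8 * np.sin(2 * np.pi * 3 * t)
        rec = np.array(delta_demodulate(delta_modulate(x)))
        print(round(float(np.max(np.abs(rec - x))), 3))   # tracking error shrinks with a finer step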

  7. Space station electrical power distribution analysis using a load flow approach

    NASA Technical Reports Server (NTRS)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner similar to present terrestrial electric utility power systems. The initial baseline reference configuration will contain more than 50 nodes or busses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase at 20 kHz, grow to a level of 300 kW steady state, and be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well known energy assessment and management tool that is widely used throughout the electric power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.
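
    For reference, these are the standard AC load-flow (power-flow) equations that a tool of this kind solves at each bus; the notation is the textbook convention, not taken from the paper:

        P_i = \sum_{j=1}^{N} |V_i|\,|V_j| \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right), \qquad
        Q_i = \sum_{j=1}^{N} |V_i|\,|V_j| \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right)

    where \theta_{ij} = \theta_i - \theta_j is the voltage-angle difference and G_{ij} + jB_{ij} is an element of the bus admittance matrix. A load-flow solver (typically Newton-Raphson) iterates on the bus voltage magnitudes and angles until these computed injections match the specified generation and load at every bus.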

  8. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  9. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, as power supplies generated from renewable power sources cause frequent power failures, data processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiencies and slow operations. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency upon frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a few order-of-magnitude reductions in energy in comparison with a volatile processor with SRAM.

  10. Balancing computation and communication power in power constrained clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
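
    As a toy Python simulation of the decision rule the abstract describes (predict the wait, power-gate when it exceeds a threshold, and let a cluster agent hand the freed budget to active nodes), the sketch below uses invented class names, thresholds, and wattages; none of these values come from the patent.

        class Node:
            def __init__(self, name, predicted_wait_s):
                self.name = name
                self.predicted_wait_s = predicted_wait_s   # predicted idle time before next task
                self.power_w = 200.0                       # assumed active power budget

            def maybe_power_gate(self, threshold_s):
                """Return the watts freed if the predicted idle period justifies gating."""
                if self.predicted_wait_s > threshold_s:
                    freed = self.power_w - 20.0            # assumed gated floor of 20 W
                    self.power_w = 20.0
                    return freed
                return 0.0

        class ClusterAgent:
            def __init__(self, nodes):
                self.nodes = nodes

            def rebalance(self, threshold_s=2.0):
                freed = sum(n.maybe_power_gate(threshold_s) for n in self.nodes)
                active = [n for n in self.nodes if n.power_w > 20.0]
                for n in active:                           # hand the freed budget to busy nodes
                    n.power_w += freed / len(active)
                return freed

        cluster = ClusterAgent([Node("n0", 0.1), Node("n1", 5.0), Node("n2", 0.3)])
        print(cluster.rebalance(), [round(n.power_w) for n in cluster.nodes])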

  11. Computer vision in the poultry industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  12. #WomenInSTEM: Using Science & Math to Power the Globe

    ScienceCinema

    Jordan, Rhonda

    2018-01-16

    Growing up, Dr. Rhonda Jordan always enjoyed math and science. After completing her master's in electrical engineering at Columbia University she co-founded a startup in Tanzania that provides access to power for residents who are not connected to the electrical grid. This video is part of the Energy Department's #WomenInSTEM video series. At the Energy Department, we're committed to supporting a diverse talent pool of STEM innovators ready to address the challenges and opportunities of our growing clean energy economy.

  13. Virtual Reality versus Computer-Aided Exposure Treatments for Fear of Flying

    ERIC Educational Resources Information Center

    Tortella-Feliu, Miquel; Botella, Cristina; Llabres, Jordi; Breton-Lopez, Juana Maria; del Amo, Antonio Riera; Banos, Rosa M.; Gelabert, Joan M.

    2011-01-01

    Evidence is growing that two modalities of computer-based exposure therapies--virtual reality and computer-aided psychotherapy--are effective in treating anxiety disorders, including fear of flying. However, they have not yet been directly compared. The aim of this study was to analyze the efficacy of three computer-based exposure treatments for…

  14. Case Studies of Auditing in a Computer-Based Systems Environment.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    In response to a growing need for effective and efficient means for auditing computer-based systems, a number of studies dealing primarily with batch-processing type computer operations have been conducted to explore the impact of computers on auditing activities in the Federal Government. This report first presents some statistical data on…

  15. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  16. Toward increased reliability in the electric power industry: direct temperature measurement in transformers using fiber optic sensors

    NASA Astrophysics Data System (ADS)

    McDonald, Greg

    1998-09-01

    Optimal loading, prevention of catastrophic failures and reduced maintenance costs are some of the benefits of accurate determination of hot spot winding temperatures in medium and high power transformers. Temperature estimates obtained using current theoretical models are not always accurate. Traditional technologies (IR, thermocouples, etc.) are unsuitable or inadequate for direct measurement. Nortech fiber-optic temperature sensors offer EMI immunity and chemical resistance and are a proven solution to the problem. The Nortech sensor's measurement principle is based on variations in the spectral absorption of a fiber-mounted semiconductor chip, and probes are interchangeable with no need for recalibration. The total length of probe plus extension can be up to several hundred meters, allowing system electronics to be located in the control room or mounted in the transformer instrumentation cabinet. All of the sensor materials withstand temperatures up to 250 °C and have demonstrated excellent resistance to the harsh transformer environment (hot oil, kerosene). Thorough study of the problem and industry collaboration in testing and installation allows Nortech to identify and meet the need for durable probes, leak-proof feedthroughs, standard computer interfaces and measurement software. Refined probe technology, the method's simplicity and reliable calibration are all assets that should lead to growing acceptance of this type of direct measurement in the electric power industry.

  17. Costa - Introduction to 2015 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, James E.

    In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, so too is the distribution of scientific, engineering and computing resources. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.

  18. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    PubMed

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
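
    A self-contained Python/sqlite3 sketch of the relational-algebra core of domain fusion analysis is shown below; the table layout, protein names, and domain names are invented toy data, whereas the real analysis runs analogous SQL against Pfam domain assignments over SWISS-PROT+TrEMBL.

        import sqlite3

        # Tiny in-memory stand-in for a protein -> domain assignment table.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE protein_domain (protein TEXT, domain TEXT);
            INSERT INTO protein_domain VALUES
                ('fusionAB', 'DomA'), ('fusionAB', 'DomB'),   -- one protein carrying both domains
                ('P1', 'DomA'), ('P2', 'DomB'), ('P3', 'DomC');
        """)

        # Relational-algebra core of domain fusion: pairs of domains co-occurring in one
        # protein are joined back to the single-domain proteins carrying them, predicting
        # a functional link between those partner proteins.
        rows = conn.execute("""
            SELECT a.protein AS fusion, pa.protein AS partner1, pb.protein AS partner2
            FROM protein_domain a
            JOIN protein_domain b ON a.protein = b.protein AND a.domain < b.domain
            JOIN protein_domain pa ON pa.domain = a.domain AND pa.protein <> a.protein
            JOIN protein_domain pb ON pb.domain = b.domain AND pb.protein <> b.protein
        """).fetchall()
        print(rows)   # [('fusionAB', 'P1', 'P2')]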

  19. Teaching and Learning Physics in a 1:1 Laptop School

    NASA Astrophysics Data System (ADS)

    Zucker, Andrew A.; Hug, Sarah T.

    2008-12-01

    1:1 laptop programs, in which every student is provided with a personal computer to use during the school year, permit increased and routine use of powerful, user-friendly computer-based tools. Growing numbers of 1:1 programs are reshaping the roles of teachers and learners in science classrooms. At the Denver School of Science and Technology, a public charter high school where a large percentage of students come from low-income families, 1:1 laptops are used often by teachers and students. This article describes the school's use of laptops, the Internet, and related digital tools, especially for teaching and learning physics. The data are from teacher and student surveys, interviews, classroom observations, and document analyses. Physics students and teachers use an interactive digital textbook; Internet-based simulations (some developed by a Nobel Prize winner); word processors; digital drop boxes; email; formative electronic assessments; computer-based and stand-alone graphing calculators; probes and associated software; and digital video cameras to explore hypotheses, collaborate, engage in scientific inquiry, and to identify strengths and weaknesses of students' understanding of physics. Technology provides students at DSST with high-quality tools to explore scientific concepts and the experiences of teachers and students illustrate effective uses of digital technology for high school physics.

  20. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  1. SCNS: a graphical tool for reconstructing executable regulatory networks from single-cell genomic data.

    PubMed

    Woodhouse, Steven; Piterman, Nir; Wintersteiger, Christoph M; Göttgens, Berthold; Fisher, Jasmin

    2018-05-25

    Reconstruction of executable mechanistic models from single-cell gene expression data represents a powerful approach to understanding developmental and disease processes. New ambitious efforts like the Human Cell Atlas will soon lead to an explosion of data with potential for uncovering and understanding the regulatory networks which underlie the behaviour of all human cells. In order to take advantage of this data, however, there is a need for general-purpose, user-friendly and efficient computational tools that can be readily used by biologists who do not have specialist computer science knowledge. The Single Cell Network Synthesis toolkit (SCNS) is a general-purpose computational tool for the reconstruction and analysis of executable models from single-cell gene expression data. Through a graphical user interface, SCNS takes single-cell qPCR or RNA-sequencing data taken across a time course, and searches for logical rules that drive transitions from early cell states towards late cell states. Because the resulting reconstructed models are executable, they can be used to make predictions about the effect of specific gene perturbations on the generation of specific lineages. SCNS should be of broad interest to the growing number of researchers working in single-cell genomics and will help further facilitate the generation of valuable mechanistic insights into developmental, homeostatic and disease processes.
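
    As a tiny Python illustration of the underlying idea of searching for Boolean update rules consistent with observed state transitions, the sketch below enumerates all two-input Boolean functions; the gene names, toy transitions, and exhaustive enumeration are assumptions for the example, whereas SCNS itself operates on multi-gene single-cell time-course data with a far richer rule search.

        from itertools import product

        # Toy observations for one target gene: ((geneA, geneB) at time t) -> target at t+1.
        observations = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

        def candidate_rules():
            """Enumerate every Boolean function of two inputs as a truth table."""
            inputs = list(product([0, 1], repeat=2))
            for outputs in product([0, 1], repeat=4):
                yield dict(zip(inputs, outputs))

        consistent = [rule for rule in candidate_rules()
                      if all(rule[state] == nxt for state, nxt in observations)]
        print(len(consistent), consistent[0])   # here the only consistent rule is "target := geneA"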

  2. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you only look once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
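
    To show the basic mechanics being adapted, here is a minimal uniform symmetric quantizer in Python with a distribution-aware clip range standing in for the paper's piecewise-Gaussian interval selection; the bit widths, the synthetic Gaussian weights, and the 3-sigma clip are illustrative assumptions rather than the paper's procedure.

        import numpy as np

        def quantize_uniform(w, n_bits, clip_range=None):
            """Uniform symmetric quantization of a weight tensor to n_bits; the clip
            range can be chosen from the weight distribution instead of the raw max."""
            max_abs = clip_range if clip_range is not None else float(np.max(np.abs(w)))
            levels = 2 ** (n_bits - 1) - 1
            step = max_abs / levels
            q = np.clip(np.round(w / step), -levels, levels)
            return q * step                                # de-quantized values used at inference

        rng = np.random.default_rng(0)
        w = rng.normal(0.0, 0.05, size=10000)              # synthetic Gaussian-like layer weights
        for bits in (8, 4):
            wq = quantize_uniform(w, bits, clip_range=3 * float(w.std()))
            print(bits, "bits, mean abs error:", round(float(np.mean(np.abs(w - wq))), 5))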

  3. System-wide power management control via clock distribution network

    DOEpatents

    Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

    2015-05-19

    An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.

  4. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.

  5. Impedance-based structural health monitoring of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Pitchford, Corey; Grisso, Benjamin L.; Inman, Daniel J.

    2007-04-01

    Wind power is a fast-growing source of non-polluting, renewable energy with vast potential. However, current wind turbine technology must be improved before the potential of wind power can be fully realized. Wind turbine blades are one of the key components in improving this technology. Blade failure is very costly because it can damage other blades, the wind turbine itself, and possibly other wind turbines. A successful damage detection system incorporated into wind turbines could extend blade life and allow for less conservative designs. A damage detection method which has shown promise on a wide variety of structures is impedance-based structural health monitoring. The technique utilizes small piezoceramic (PZT) patches attached to a structure as self-sensing actuators to both excite the structure with high-frequency excitations, and monitor any changes in structural mechanical impedance. By monitoring the electrical impedance of the PZT, assessments can be made about the integrity of the mechanical structure. Recently, advances in hardware systems with onboard computing, including actuation and sensing, computational algorithms, and wireless telemetry, have improved the accessibility of the impedance method for in-field measurements. This paper investigates the feasibility of implementing such an onboard system inside of turbine blades as an in-field method of damage detection. Viability of onboard detection is accomplished by running a series of tests to verify the capability of the method on an actual wind turbine blade section from an experimental carbon/glass/balsa composite blade developed at Sandia National Laboratories.
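
    For a flavor of how impedance signatures are turned into a damage indicator, the Python sketch below computes a root-mean-square-deviation (RMSD) metric between a baseline and a current impedance spectrum; RMSD is a commonly used metric in the impedance-based SHM literature but is not necessarily the one used in this study, and the frequency band and synthetic spectra are invented for illustration.

        import numpy as np

        def rmsd_damage_metric(z_baseline, z_current):
            """Root-mean-square deviation between baseline and current impedance
            spectra (same frequency grid); larger values suggest structural change."""
            z0 = np.asarray(z_baseline, dtype=float)
            z1 = np.asarray(z_current, dtype=float)
            return float(np.sqrt(np.sum((z1 - z0) ** 2) / np.sum(z0 ** 2)))

        freq = np.linspace(30e3, 40e3, 200)                             # assumed excitation band, Hz
        baseline = 100 + 5 * np.sin(freq / 1e3)                         # stand-in healthy spectrum
        damaged = baseline + 1.5 * np.exp(-((freq - 35e3) / 500) ** 2)  # local resonance shift
        print(round(rmsd_damage_metric(baseline, baseline), 4),
              round(rmsd_damage_metric(baseline, damaged), 4))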

  6. Identification of the actual state and entity availability forecasting in power engineering using neural-network technologies

    NASA Astrophysics Data System (ADS)

    Protalinsky, O. M.; Shcherbatov, I. A.; Stepanov, P. V.

    2017-11-01

    A growing number of severe accidents in the Russian Federation calls for a system that can prevent emergency situations. In many cases the accident rate stems from careless inspections and shortcomings in repair planning; across the country, accident rates are growing because of the so-called "human factor". This makes it urgent to identify the actual state of technological facilities in power engineering from process data using artificial intelligence methods. The present work considers four model states of the manufacturing equipment of power engineering companies: defect, failure, pre-emergency situation, and accident. Defect evaluation uses both data from SCADA and ASEPCR and qualitative information (verbal assessments by subject-matter experts, and photo and video materials from surveys processed with pattern recognition methods). Early identification of defects makes it possible to predict equipment failure using artificial neural networks; this in turn allows predicted reliability characteristics of engineering facilities to be calculated with methods of reliability theory. These parameters provide a real-time estimate of the remaining service life of the equipment over its whole operating period. The neural network model evaluates the probability of failure of a piece of equipment given the types of observed defects and their underlying causes. The article justifies the choice of training and testing samples for the developed neural network, evaluates the adequacy of the model, and shows how it can be used to forecast equipment failure. Simulation experiments were carried out on a computer with retrospective samples of actual values from power engineering companies, and the efficiency of the developed model was demonstrated for different types of manufacturing equipment. Further research directions on this subject are also proposed.

  7. Women's Organizations in Rural Development. Women in Development.

    ERIC Educational Resources Information Center

    Staudt, Kathleen A.

    Political power tends to overlap with economic power, thus favoring those with access to land, livestock, capital, and other productive resources; in virtually all societies women have fewer of those productive resources than men, which reflects and explains women's limited political power. Growing documentation indicates that men…

  8. Top 10 Threats to Computer Systems Include Professors and Students

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  9. The Role of Physicality in Rich Programming Environments

    ERIC Educational Resources Information Center

    Liu, Allison S.; Schunn, Christian D.; Flot, Jesse; Shoop, Robin

    2013-01-01

    Computer science proficiency continues to grow in importance, while the number of students entering computer science-related fields declines. Many rich programming environments have been created to motivate student interest and expertise in computer science. In the current study, we investigated whether a recently created environment, Robot…

  10. A System for Managing Replenishment of a Nutrient Solution Using an Electrical Conductivity Controller

    NASA Technical Reports Server (NTRS)

    Davis, D.; Dogan, N.; Aglan, H.; Mortley, D.; Loretan, P.

    1998-01-01

    Control of nutrient solution parameters is very important for the growth and development of plants grown hydroponically. Protocols involving different nutrient solution replenishment times (e.g., one-week, two-week, or two-day replenishment) provide manual periodic control of the nutrient solution's electrical conductivity (EC). Since plants take up nutrients as they grow, manual control has the drawback that EC is not held constant between replenishments. In an effort to correct this problem, the Center for Food and Environmental Systems for Human Exploration of Space at Tuskegee University has developed a system for managing and controlling EC over a plant's entire growing cycle. A prototype system is being tested on sweetpotato production using the nutrient film technique (NFT), and it is being compared to a system in which sweetpotatoes are grown using NFT with manual control. NASA has played an important role in the development of environmental control systems and has become a forerunner in growing plants hydroponically with control systems based on networked data acquisition and control in environmental growth chambers. Data acquisition systems that provide real-time operation, calibration, set points, a user panel, and graphical programming offer a good means of controlling nutrient solution parameters such as EC and pH [Bledsoe, 1993]. In NASA's Biomass Production Chamber (BPC) at Kennedy Space Center, control is provided by a programmable logic controller (PLC), an industrial controller that combines ladder logic with the ability to handle various levels of electrical power. The controller regulates temperature, light, and other parameters that affect the plant's environment. In the BPC, the Nutrient Delivery System (NIX), a sub-system of the PLC, controls nutrient solution parameters such as EC, pH, and solution levels. When the nutrient EC measurement goes outside a preset range (120-130 mS/m), a set amount of a stock solution of nutrients is automatically added by a metering pump to bring the EC back into the operating range [Fortson, 1992]. This paper describes a system developed at Tuskegee University for controlling the EC of a nutrient solution used for growing sweetpotatoes with an EC controller and a computer with LabVIEW data acquisition and instrumentation software. It also describes the preliminary data obtained from the growth of sweetpotatoes using this prototype control system.
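    The replenishment logic amounts to a simple bang-bang controller on EC: when the measured value drops below the 120-130 mS/m band, dose stock solution until it is back in range. The sketch below simulates that loop; the uptake rate and dose size are made-up numbers, and the real system reads EC through the controller and LabVIEW rather than simulating it.

      EC_LOW, EC_HIGH = 120.0, 130.0   # operating band in mS/m, from the abstract

      def control_step(ec, uptake=0.8, dose_boost=6.0):
          """One bang-bang control step: plants lower EC; a dose of stock solution raises it.

          The uptake and dose numbers are invented for illustration only.
          """
          ec -= uptake                   # nutrient uptake between polls
          dosed = ec < EC_LOW
          if dosed:
              ec += dose_boost           # metering pump adds stock solution
          return ec, dosed

      ec = 128.0
      for step in range(12):
          ec, dosed = control_step(ec)
          print(f"step {step:2d}  EC = {ec:6.1f} mS/m  dosed = {dosed}")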

  11. Power increase at first ProCure21 project.

    PubMed

    2007-06-01

    Throughout the UK, hospitals are embarking on new-build programmes to meet the needs of growing communities. However, site acreage and accommodation are often already under pressure, as Dale Power Solutions found when it was commissioned to increase standby power availability at the Friarage Hospital in Northallerton, North Yorkshire.

  12. Tactile Radar: experimenting a computer game with visually disabled.

    PubMed

    Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana

    2017-09-18

    Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games, both videogames and audio-games, and tactile games are beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it is connected to a computer running a tactile game in which the player must find and collect randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary first- and third-person methods were used: the debriefing interview and a quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored for visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically, this research contributes to the consolidation and development of complementary first- and third-person methods, which are particularly useful in disability research, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still face barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is well suited to the creation of video games for visually disabled people, providing playful interaction for players.

  13. Computational Modeling for Language Acquisition: A Tutorial with Syntactic Islands

    ERIC Educational Resources Information Center

    Pearl, Lisa S.; Sprouse, Jon

    2015-01-01

    Purpose: Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. Method: We provide a general…

  14. Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4

    DTIC Science & Technology

    2017-12-20

    Excited states of nanomaterials are computed by first-principles methods in the software package ACES using large parallel computers, growing to the exascale. Subject terms: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing. Progress with new density functional methods is also reported.

  15. A Framework for Understanding Physics Students' Computational Modeling Practices

    ERIC Educational Resources Information Center

    Lunk, Brandon Robert

    2012-01-01

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content…

  16. Computing and Engineering in Afterschool. Afterschool Alert. Issue Brief No. 62

    ERIC Educational Resources Information Center

    Afterschool Alliance, 2013

    2013-01-01

    This Afterschool Alert Issue Brief explores how afterschool programs are offering innovative, hands-on computing and engineering education opportunities. Both these subjects have emerged as priority areas within the "STEM" fields. Computing is one of the fastest growing industries, and yet current rates of college graduation in computer…

  17. The Impact of User Interface on Young Children's Computational Thinking

    ERIC Educational Resources Information Center

    Pugnali, Alex; Sullivan, Amanda; Bers, Marina Umaschi

    2017-01-01

    Aim/Purpose: Over the past few years, new approaches to introducing young children to computational thinking have grown in popularity. This paper examines the role that user interfaces have on children's mastery of computational thinking concepts and positive interpersonal behaviors. Background: There is a growing pressure to begin teaching…

  18. Waiting for the Return. Maximizing Investments in Technology.

    ERIC Educational Resources Information Center

    Workforce Economics, 1996

    1996-01-01

    Investments in technology and the number of workers using computers are growing quickly and at an increasing rate. From 1990-1995, investments in computers and related equipment tripled. Real (inflation-adjusted dollars) investments in computers and peripheral equipment increased from $200 million in 1973 to $91.6 billion in 1995. Increasing…

  19. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criterion. However, one disadvantage of conventional region growing is redundancy: it requires a large amount of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive, single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions in each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes; the next cube is estimated by checking isolated segmented regions on all six faces of the current local cube. This local, non-recursive 3D region growing algorithm is memory-efficient and computationally efficient. Clinical testing of the algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones at the skull base, and reveal the cerebral vascular structures clearly.
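    A simplified 2D analogue of the run-merging idea: segment contiguous runs in each row, then merge overlapping runs between adjacent rows with a union-find. This sketch illustrates the non-recursive, run-based structure only; SymRG itself works slice by slice in 3D and adds the local-cube strategy.

      import numpy as np

      def runs_in_row(row):
          """Return (start, end) index pairs of contiguous foreground runs in one row."""
          runs, start = [], None
          for i, v in enumerate(row):
              if v and start is None:
                  start = i
              elif not v and start is not None:
                  runs.append((start, i))
                  start = None
          if start is not None:
              runs.append((start, len(row)))
          return runs

      def label_by_runs(mask):
          """Run-based connected-component labelling of a 2D binary mask."""
          parent = {}
          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]
                  a = parent[a]
              return a
          def union(a, b):
              parent[find(a)] = find(b)

          row_runs, next_label = [], 0
          for r, row in enumerate(mask):
              labelled = []
              for s, e in runs_in_row(row):
                  lab = next_label; next_label += 1
                  parent[lab] = lab
                  if r > 0:                                  # merge with overlapping runs above
                      for ps, pe, plab in row_runs[r - 1]:
                          if s < pe and ps < e:
                              union(lab, plab)
                  labelled.append((s, e, lab))
              row_runs.append(labelled)

          out = np.zeros(np.shape(mask), dtype=int)
          for r, labelled in enumerate(row_runs):
              for s, e, lab in labelled:
                  out[r, s:e] = find(lab) + 1
          return out

      mask = np.array([[1, 1, 0, 1],
                       [0, 1, 0, 1],
                       [0, 0, 0, 1]])
      print(label_by_runs(mask))    # two components: the top-left blob and the right column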

  20. GPS synchronized power system phase angle measurements

    NASA Astrophysics Data System (ADS)

    Wilson, Robert E.; Sterlina, Patrick S.

    1994-09-01

    This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer simulations of the dynamic response of the power system to naturally occurring short circuits; these simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with the field measurements.
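    For reference, a PMU's basic measurement can be sketched as a single-cycle DFT phasor estimate whose window is time-tagged by GPS so that angles at widely separated substations are directly comparable. The code below shows the textbook estimate on a synthetic 60 Hz waveform; the sampling rate and nominal frequency are illustrative assumptions.

      import numpy as np

      def phasor(samples, f_nominal=60.0, fs=960.0):
          """Estimate the fundamental-frequency phasor (RMS magnitude, angle in degrees)."""
          n = int(round(fs / f_nominal))                       # samples per cycle
          x = np.asarray(samples[:n], dtype=float)
          k = np.arange(n)
          ph = np.sqrt(2) / n * np.sum(x * np.exp(-2j * np.pi * k / n))
          return abs(ph), np.degrees(np.angle(ph))

      fs, f = 960.0, 60.0
      t = np.arange(16) / fs                                   # one 60 Hz cycle at 960 Hz
      v = 100 * np.cos(2 * np.pi * f * t + np.radians(-30))    # synthetic waveform
      print(phasor(v))                                         # ~ (70.7, -30.0)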

  1. Self-focusing of ultraintense femtosecond optical vortices in air.

    PubMed

    Polynkin, P; Ament, C; Moloney, J V

    2013-07-12

    Our experiments show that the critical power for self-focusing collapse of femtosecond vortex beams in air is significantly higher than that of a flattop beam and grows approximately linearly with the vortex order. With less than 10% of initial transverse intensity modulation of the beam profiles, the dominant mode of self-focusing collapse is the azimuthal breakup of the vortex rings into individual filaments, the number of which grows with the input beam power. The generated bottlelike distributions of plasma filaments rotate on propagation in the direction determined by the sense of vorticity.

  2. Believer-Skeptic Meets Actor-Critic: Rethinking the Role of Basal Ganglia Pathways during Decision-Making and Reinforcement Learning

    PubMed Central

    Dunovan, Kyle; Verstynen, Timothy

    2016-01-01

    The flexibility of behavioral control is a testament to the brain's capacity for dynamically resolving uncertainty during goal-directed actions. This ability to select actions and learn from immediate feedback is driven by the dynamics of basal ganglia (BG) pathways. A growing body of empirical evidence conflicts with the traditional view that these pathways act as independent levers for facilitating (i.e., direct pathway) or suppressing (i.e., indirect pathway) motor output, suggesting instead that they engage in a dynamic competition during action decisions that computationally captures action uncertainty. Here we discuss the utility of encoding action uncertainty as a dynamic competition between opposing control pathways and provide evidence that this simple mechanism may have powerful implications for bridging neurocomputational theories of decision making and reinforcement learning. PMID:27047328

  3. Active dendrites: colorful wings of the mysterious butterflies.

    PubMed

    Johnston, Daniel; Narayanan, Rishikesh

    2008-06-01

    Santiago Ramón y Cajal had referred to neurons as the 'mysterious butterflies of the soul.' Wings of these butterflies--their dendrites--were traditionally considered as passive integrators of synaptic information. Owing to a growing body of experimental evidence, it is now widely accepted that these wings are colorful, endowed with a plethora of active conductances, with each family of these butterflies made of distinct hues and shades. Furthermore, rapidly evolving recent literature also provides direct and indirect demonstrations for activity-dependent plasticity of these active conductances, pointing toward chameleonic adaptability in these hues. These experimental findings firmly establish the immense computational power of a single neuron, and thus constitute a turning point toward the understanding of various aspects of neuronal information processing. In this brief historical perspective, we track important milestones in the chameleonic transmogrification of these mysterious butterflies.

  4. Believer-Skeptic Meets Actor-Critic: Rethinking the Role of Basal Ganglia Pathways during Decision-Making and Reinforcement Learning.

    PubMed

    Dunovan, Kyle; Verstynen, Timothy

    2016-01-01

    The flexibility of behavioral control is a testament to the brain's capacity for dynamically resolving uncertainty during goal-directed actions. This ability to select actions and learn from immediate feedback is driven by the dynamics of basal ganglia (BG) pathways. A growing body of empirical evidence conflicts with the traditional view that these pathways act as independent levers for facilitating (i.e., direct pathway) or suppressing (i.e., indirect pathway) motor output, suggesting instead that they engage in a dynamic competition during action decisions that computationally captures action uncertainty. Here we discuss the utility of encoding action uncertainty as a dynamic competition between opposing control pathways and provide evidence that this simple mechanism may have powerful implications for bridging neurocomputational theories of decision making and reinforcement learning.

  5. MicroRNAs and complex diseases: from experimental results to computational models.

    PubMed

    Chen, Xing; Xie, Di; Zhao, Qi; You, Zhu-Hong

    2017-10-17

    A great many microRNAs (miRNAs) have been discovered at a rapid pace in plants, green algae, viruses and animals. As some of the most important components in the cell, miRNAs play an increasingly important role in various essential biological processes. Over recent decades, many experimental methods and computational models have been designed and implemented to identify novel miRNA-disease associations. In this review, the functions of miRNAs, miRNA-target interactions, miRNA-disease associations and some important publicly available miRNA-related databases are discussed in detail. In particular, considering that an increasing number of miRNA-disease associations have been experimentally confirmed, we selected five important miRNA-related human diseases and five crucial disease-related miRNAs and provide corresponding introductions. Identifying disease-related miRNAs has become an important goal of biomedical research, which will accelerate the understanding of disease pathogenesis at the molecular level and the design of molecular tools for disease diagnosis, treatment and prevention. Computational models have become an important means of identifying novel miRNA-disease associations: they can select the most promising miRNA-disease pairs for experimental validation and significantly reduce the time and cost of biological experiments. Here, we review 20 state-of-the-art computational models for predicting miRNA-disease associations from different perspectives. Finally, we summarize four factors that make predicting potential disease-related miRNAs difficult, a framework for constructing powerful computational models to predict potential miRNA-disease associations, including five feasible and important research schemas, and future directions for the further development of computational models. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Planning and management of cloud computing networks

    NASA Astrophysics Data System (ADS)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption: if the power consumption of telecommunication networks and data centers were counted as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources optimally with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results obtained from our test cases show that, besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and from any device with Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and the workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what the optimal data center locations are. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy-efficient data centers, and CO2 emissions can be decreased by choosing data centers supplied with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering the communication traffic among VMs. Cloud data centers receive new applications over time, and these applications need on-demand resource provisioning.
Each application is composed of multiple types of VMs that interact among themselves. A program called the scheduler must place each new VM on a server, and that placement affects QoS and power consumption. Our method places VMs that communicate among themselves on servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered, and the most efficient ones are chosen to host the VMs. The number of VMs of each application can be changed dynamically to match the workload, and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems and uses state-of-the-art optimization solvers. Heuristics are then developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very small optimality gaps, between 0% and 1.95%, are obtained, with execution times on the order of minutes for the planning problem and less than a second for the real-time cases. We consider that this thesis on resource provisioning of cloud computing networks includes important contributions to this research area, and innovative commercial applications based on the proposed methods have a promising future.
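    A toy greedy placement pass in the spirit of the scheduler described above: co-locate VMs that exchange traffic and prefer power-efficient servers. The server names, capacities, hop-count distance model and traffic matrix are all invented for illustration; the thesis formulates the real problem as a mixed integer program.

      servers = {
          "s0": {"slots": 2, "power": 1.0},   # relative power cost per hosted VM
          "s1": {"slots": 2, "power": 0.7},
          "s2": {"slots": 2, "power": 1.3},
      }
      traffic = {("web", "app"): 5.0, ("app", "db"): 8.0}   # traffic volume between VM pairs

      def distance(s_a, s_b):
          """Hop-count proxy on a hypothetical line topology (s0 - s1 - s2)."""
          return abs(int(s_a[1]) - int(s_b[1]))

      def comm_cost(vm, server, placement):
          """Traffic-weighted distance from `server` to the already-placed peers of `vm`."""
          cost = 0.0
          for (a, b), t in traffic.items():
              other = b if a == vm else a if b == vm else None
              if other is not None and other in placement:
                  cost += t * distance(server, placement[other])
          return cost

      def place(vms):
          placement, load = {}, {s: 0 for s in servers}
          for vm in vms:
              candidates = [s for s, spec in servers.items() if load[s] < spec["slots"]]
              best = min(candidates,
                         key=lambda s: servers[s]["power"] + comm_cost(vm, s, placement))
              placement[vm] = best
              load[best] += 1
          return placement

      print(place(["web", "app", "db"]))   # e.g. {'web': 's1', 'app': 's1', 'db': 's0'}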

  7. Reliability of quantitative EEG (qEEG) measures and LORETA current source density at 30 days.

    PubMed

    Cannon, Rex L; Baldwin, Debora R; Shaw, Tiffany L; Diloreto, Dominic J; Phillips, Sherman M; Scruggs, Annie M; Riehl, Timothy C

    2012-06-14

    There is growing interest in using quantitative EEG and LORETA current source density in clinical and research settings. Importantly, if these indices are to be employed in clinical settings, then the reliability of these measures is of great concern. Neuroguide (Applied Neurosciences) is sophisticated software developed for the analysis of power and connectivity measures of the EEG as well as LORETA current source density. To date there are relatively few data evaluating topographical EEG reliability contrasts for all 19 channels, and no studies have evaluated reliability for LORETA calculations. We obtained 4-min eyes-closed and eyes-opened EEG recordings at 30-day intervals. The EEG was analyzed in Neuroguide, FFT power, coherence and phase were computed for traditional frequency bands (delta, theta, alpha and beta), and LORETA current source density was calculated in 1 Hz increments and summed for total power in eight regions of interest (ROI). In order to obtain a robust measure of reliability, we utilized a random effects model with an absolute agreement definition. The results show very good reproducibility for total absolute power and coherence; phase shows lower reliability coefficients. LORETA current source density shows very good reliability, with an average of 0.81 for ECB and 0.82 for EOB. Similarly, the eight regions of interest show good to very good agreement across time. Implications for future directions and the use of qEEG and LORETA in clinical populations are discussed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
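    The "random effects model with an absolute agreement definition" corresponds to an intraclass correlation of the ICC(A,1) type. A minimal sketch of that statistic for a test-retest design (subjects x sessions) is shown below on synthetic data; it follows the standard McGraw and Wong formula rather than reproducing Neuroguide's exact computation.

      import numpy as np

      def icc_a1(data):
          """ICC(A,1): two-way random effects, absolute agreement, single measurement.

          `data` is an (n subjects x k sessions) array, e.g. qEEG power at day 0 and day 30.
          """
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
          ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
          resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0, keepdims=True) + grand
          ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

      # toy test-retest example: 8 subjects, 2 sessions, strong subject effect
      rng = np.random.default_rng(0)
      subject_effect = rng.normal(0, 2, size=(8, 1))
      scores = 10 + subject_effect + rng.normal(0, 0.5, size=(8, 2))
      print(round(float(icc_a1(scores)), 3))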

  8. The Importance of Outreach Programs to Unblock the Pipeline and Broaden Diversity in ICT Education

    ERIC Educational Resources Information Center

    Lang, Catherine; Craig, Annemieke; Egan, Mary Anne

    2016-01-01

    There is a need for outreach programs to attract a diverse range of students to the computing discipline. The lack of qualified computing graduates to fill the growing number of computing vacancies is of concern to government and industry and there are few female students entering the computing pipeline at high school level. This paper presents…

  9. Tenth Grade Students' Time Using a Computer as a Predictor of the Highest Level of Education Attempted

    ERIC Educational Resources Information Center

    Gaffey, Adam John

    2014-01-01

    As computing technology continued to grow in the lives of secondary students from 2002 to 2006, researchers failed to identify the influence using computers would have on the highest level of education students attempted. During the early part of the century schools moved towards increasing the usage of computers. Numerous stakeholders were unsure…

  10. EINSTEIN-HOME DISCOVERY OF 24 PULSARS IN THE PARKES MULTI-BEAM PULSAR SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knispel, B.; Kim, H.; Allen, B.

    2013-09-10

    We have conducted a new search for radio pulsars in compact binary systems in the Parkes multi-beam pulsar survey (PMPS) data, employing novel methods to remove the Doppler modulation from binary motion. This has yielded unparalleled sensitivity to pulsars in compact binaries. The required computation time of approximately 17,000 CPU core years was provided by the distributed volunteer computing project Einstein@Home, which has a sustained computing power of about 1 PFlop s⁻¹. We discovered 24 new pulsars in our search, 18 of which were isolated pulsars, and 6 were members of binary systems. Despite the wide filterbank channels and relatively slow sampling time of the PMPS data, we found pulsars with very large ratios of dispersion measure (DM) to spin period. Among those is PSR J1748-3009, the millisecond pulsar with the highest known DM (approximately 420 pc cm⁻³). We also discovered PSR J1840-0643, which is in a binary system with an orbital period of 937 days, the fourth largest known. The new pulsar J1750-2536 likely belongs to the rare class of intermediate-mass binary pulsars. Three of the isolated pulsars show long-term nulling or intermittency in their emission, further increasing this growing family. Our discoveries demonstrate the value of distributed volunteer computing for data-driven astronomy and the importance of applying new analysis methods to extensively searched data.

  11. Near- and far-field aerodynamics in insect hovering flight: an integrated computational study.

    PubMed

    Aono, Hikaru; Liang, Fuyou; Liu, Hao

    2008-01-01

    We present the first integrative computational fluid dynamics (CFD) study of near- and far-field aerodynamics in insect hovering flight using a biology-inspired, dynamic flight simulator. This simulator, which has been built to encompass multiple mechanisms and principles related to insect flight, is capable of 'flying' an insect on the basis of realistic wing-body morphologies and kinematics. Our CFD study integrates near- and far-field wake dynamics and shows the detailed three-dimensional (3D) near- and far-field vortex flows: a horseshoe-shaped vortex is generated and wraps around the wing in the early down- and upstroke; subsequently, the horseshoe-shaped vortex grows into a doughnut-shaped vortex ring, with an intense jet-stream present in its core, forming the downwash; and eventually, the doughnut-shaped vortex rings of the wing pair break up into two circular vortex rings in the wake. The computed aerodynamic forces show reasonable agreement with experimental results in terms of both the mean force (vertical, horizontal and sideslip forces) and the time course over one stroke cycle (lift and drag forces). A large amount of lift force (approximately 62% of total lift force generated over a full wingbeat cycle) is generated during the upstroke, most likely due to the presence of intensive and stable, leading-edge vortices (LEVs) and wing tip vortices (TVs); and correspondingly, a much stronger downwash is observed compared to the downstroke. We also estimated hovering energetics based on the computed aerodynamic and inertial torques, and powers.

  12. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  13. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  14. National Energy with Weather System Simulator (NEWS) Sets Bounds on Cost Effective Wind and Solar PV Deployment in the USA without the Use of Storage.

    NASA Astrophysics Data System (ADS)

    Clack, C.; MacDonald, A. E.; Alexander, A.; Dunbar, A. D.; Xie, Y.; Wilczak, J. M.

    2014-12-01

    The importance of weather-driven renewable energies for the United States energy portfolio is growing. The main perceived problems with weather-driven renewable energies are their intermittent nature, low power density, and high costs. In 2009, we began a large-scale investigation into the characteristics of weather-driven renewables. The project utilized the best available weather data assimilation model to compute high spatial and temporal resolution power datasets for the renewable resources of wind and solar PV. The weather model used is the Rapid Update Cycle for the years 2006-2008. The team also collated a detailed electrical load dataset for the contiguous USA from the Federal Energy Regulatory Commission for the same three-year period. The coincident time series of electrical load and weather data allows temporally correlated computations for optimal design over large geographic areas. The past two years have seen the development of a cost optimization mathematical model that designs electric power systems. The model plans the system and dispatches it on an hourly timescale. The system is designed to be reliable, reduce carbon, reduce the variability of renewable resources, and move electricity about the whole domain, and it would create the infrastructure needed to reduce carbon emissions to zero by 2050. The advantages of the system are reduced water demand, dual incomes for farmers, jobs from construction of the infrastructure, and price stability for energy. One important simplified test that was run included existing US carbon-free power sources, natural gas power when needed, and a High Voltage Direct Current (HVDC) power transmission network. This study shows that the costs and carbon emissions of an optimally designed national system decrease with geographic size. It shows that, with achievable estimates of wind and solar generation costs, the US could decrease its carbon emissions by up to 80% by the early 2030s without an increase in electricity costs. The key requirement would be a 48-state network of HVDC transmission, creating a national market for electricity not possible in the current AC grid. The study also showed how the price of natural gas fuel influences the optimal system design.

  15. Computational Power of Symmetry-Protected Topological Phases.

    PubMed

    Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-07

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  16. Computational Power of Symmetry-Protected Topological Phases

    NASA Astrophysics Data System (ADS)

    Stephen, David T.; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-01

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  17. Emulating a million machines to investigate botnets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudish, Donald W.

    2010-06-01

    Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands, or in some cases even millions, of computers, making them among the world's most powerful computers for some applications.

  18. Simulation of Nonlinear Instabilities in an Attachment-Line Boundary Layer

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.

    1996-01-01

    The linear and the nonlinear stability of disturbances that propagate along the attachment line of a three-dimensional boundary layer is considered. The spatially evolving disturbances in the boundary layer are computed by direct numerical simulation (DNS) of the unsteady, incompressible Navier-Stokes equations. Disturbances are introduced either by forcing at the inflow or by applying suction and blowing at the wall. Quasi-parallel linear stability theory and a nonparallel theory yield notably different stability characteristics for disturbances near the critical Reynolds number; the DNS results confirm the latter theory. Previously, a weakly nonlinear theory and computations revealed a high wave-number region of subcritical disturbance growth. More recent computations have failed to achieve this subcritical growth. The present computational results indicate the presence of subcritically growing disturbances; the results support the weakly nonlinear theory. Furthermore, an explanation is provided for the previous theoretical and computational discrepancy. In addition, the present results demonstrate that steady suction can be used to stabilize disturbances that otherwise grow subcritically along the attachment line.

  19. Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing

    NASA Astrophysics Data System (ADS)

    Kim, Mooseop; Ryou, Jaecheol

    The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architectures and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture for a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and consumes about 1.1 mA on a 0.25 μm CMOS process.
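    For reference, the compression step that such an engine implements in hardware is the 80-round SHA-1 block function sketched below. This Python version only checks the round logic against hashlib on a single padded block; it says nothing about the paper's datapath or gate-level design.

      import hashlib, struct

      def rotl(x, n):
          return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

      def sha1_one_block(block64):
          """Compress a single padded 512-bit block with the standard SHA-1 rounds."""
          h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
          w = list(struct.unpack(">16I", block64))
          for t in range(16, 80):                      # message schedule expansion
              w.append(rotl(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
          a, b, c, d, e = h
          for t in range(80):                          # the 80-round datapath
              if t < 20:   f, k = (b & c) | (~b & d), 0x5A827999
              elif t < 40: f, k = b ^ c ^ d, 0x6ED9EBA1
              elif t < 60: f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
              else:        f, k = b ^ c ^ d, 0xCA62C1D6
              a, b, c, d, e = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, a, rotl(b, 30), c, d
          return b"".join(struct.pack(">I", (x + y) & 0xFFFFFFFF)
                          for x, y in zip(h, [a, b, c, d, e]))

      # one-block message "abc" with standard SHA-1 padding; must match hashlib
      msg = b"abc"
      pad = b"\x80" + b"\x00" * (55 - len(msg)) + struct.pack(">Q", 8 * len(msg))
      assert sha1_one_block(msg + pad) == hashlib.sha1(msg).digest()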

  20. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 2: Protocol specification

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.

  1. Over-the-Counter Medicines: What's Right for You?

    MedlinePlus

    Contents: Advice for Americans about Self-Care: Access + Knowledge = Power; OTC Know-How: It's on the Label; Drug Tampering. American medicine cabinets contain a growing choice of ...

  2. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
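    One standard way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve in low precision, then accumulate residual corrections in high precision. The NumPy sketch below illustrates the idea under that assumption; it is not the authors' instrumented implementation and measures no power.

      import numpy as np

      def mixed_precision_solve(A, b, iters=5):
          """Iterative refinement: solve in float32, accumulate residual corrections in float64."""
          A32 = A.astype(np.float32)
          x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # cheap low-precision solve
          for _ in range(iters):
              r = b - A @ x                                    # residual in full precision
              dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in low precision
              x += dx.astype(np.float64)
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
      b = rng.standard_normal(200)
      x = mixed_precision_solve(A, b)
      print("final residual norm:", float(np.linalg.norm(b - A @ x)))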

  3. Energy, A Crisis in Power.

    ERIC Educational Resources Information Center

    Holdren, John; Herrera, Philip

    The demand of Americans for more and more power, particularly electric power, contrasted with the deep and growing concern for the environment and a desire by private citizens to participate in the public decisions that affect the environment, is the dilemma explored in this book. Part One, by John Holdren, offers a scientist's overview of the energy…

  4. Global Communication, for the Powerful or the People? Media & Values 61.

    ERIC Educational Resources Information Center

    Silver, Rosalind, Ed.

    1993-01-01

    This issue of "Media & Values" explores the growing influence of mass media and how that influence is concentrated in the hands of a few powerful individuals or corporations. The essays present various interpretations of that influence and the implications for the world. Articles include: (1) "All Power to the Conglomerate"…

  5. Biodiversity, bioaccumulation and physiological changes in lichens growing in the vicinity of coal-based thermal power plant of Raebareli district, north India.

    PubMed

    Bajpai, Rajesh; Upreti, Dalip K; Nayaka, S; Kumari, B

    2010-02-15

    A lichen diversity assessment carried out around a coal-based thermal power plant indicated that, in general, lichen abundance increases with distance from the power plant. The photosynthetic pigments, protein and heavy metals were estimated in Pyxine cocoes (Sw.) Nyl., a common lichen growing around the thermal power plant, for further inference. The distributions of heavy metals from the power plant showed a positive correlation with distance in all directions; however, the western direction received better dispersion, as indicated by the coefficient R2. Least significant difference analysis showed that wind speed and direction play a major role in the dispersion of heavy metals. Accumulation of Al, Cr, Fe, Pb and Zn in the thallus suppressed the concentrations of pigments such as chlorophyll a, chlorophyll b and total chlorophyll, but enhanced the level of protein. Further, the concentrations of chlorophyll in P. cocoes increased with decreasing distance from the power plant, while protein, carotenoid and phaeophytisation showed a significant decrease.

  6. Pigs in cyberspace

    NASA Technical Reports Server (NTRS)

    Moravec, Hans

    1993-01-01

    Exploration and colonization of the universe awaits, but Earth-adapted biological humans are ill-equipped to respond to the challenge. Machines have gone farther and seen more, limited though they presently are by insect-like behavior inflexibility. As they become smarter over the coming decades, space will be theirs. Organizations of robots of ever increasing intelligence and sensory and motor ability will expand and transform what they occupy, working with matter, space and time. As they grow, a smaller and smaller fraction of their territory will be undeveloped frontier. Competitive success will depend more and more on using already available matter and space in ever more refined and useful forms. The process, analogous to the miniaturization that makes today's computers a trillion times more powerful than the mechanical calculators of the past, will gradually transform all activity from grossly physical homesteading of raw nature, to minimum-energy quantum transactions of computation. The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace. Because it will use resources more efficiently, a mature cyberspace of the distant future will be effectively much bigger than the present physical universe. While only an infinitesimal fraction of existing matter and space is doing interesting work, in a well developed cyberspace every bit will be part of a relevant computation or storing a useful datum. Over time, more compact and faster ways of using space and matter will be invented, and used to restructure the cyberspace, effectively increasing the amount of computational spacetime per unit of physical spacetime. Computational speed-ups will affect the subjective experience of entities in the cyberspace in a paradoxical way. At first glimpse, there is no subjective effect, because everything, inside and outside the individual, speeds up equally. But, more subtly, speed-up produces an expansion of the cyber universe, because, as thought accelerates, more subjective time passes during the fixed (probably lightspeed) physical transit time of a message between a given pair of locations - so those fixed locations seem to grow farther apart. Also, as information storage is made continually more efficient through both denser utilization of matter and more efficient encodings, there will be increasingly more cyber-stuff between any two points. The effect may somewhat resemble the continuous-creation process in the old steady-state theory of the physical universe of Hoyle, Bondi and Gold, where hydrogen atoms appear just fast enough throughout the expanding cosmos to maintain a constant density.

  7. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, a growing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods needed to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
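    An example of the "same computation applied to a large set of data" pattern: a per-pixel slope fit over a stack of thermographic frames, vectorized here with NumPy on synthetic data. The log-log slope fit and the frame dimensions are illustrative assumptions; on a GPU, each pixel's fit would map naturally to one thread.

      import numpy as np

      # Synthetic thermographic sequence: (time, rows, cols) surface temperatures.
      frames = np.random.rand(64, 120, 160) + 0.1
      times = np.linspace(0.1, 6.4, frames.shape[0])

      # Per-pixel least-squares slope of log(T) versus log(t): the same arithmetic
      # at every pixel, independent of all other pixels (ideal data parallelism).
      log_t = np.log(times)[:, None, None]
      log_T = np.log(frames)
      slope = ((log_t * log_T).mean(axis=0) - log_t.mean() * log_T.mean(axis=0)) / log_t.var()
      print(slope.shape)   # one composited value per pixel: (120, 160)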

  8. The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms

    NASA Astrophysics Data System (ADS)

    Raghavan, Prabhakar

    By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.

  9. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications: one is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  10. Student Perceptions of Online Courses

    ERIC Educational Resources Information Center

    Jones, Irma S.; Blankenship, Dianna

    2017-01-01

    Presently, at the post-secondary level, digital or online education is offered in addition to traditional face-to-face courses and the number of online course offerings is rapidly growing. The "Occupational Outlook Handbook" reveals that employment in" computer and information technology" occupations is projected to grow 12…

  11. The Case for Improving U.S. Computer Science Education

    ERIC Educational Resources Information Center

    Nager, Adams; Atkinson, Robert

    2016-01-01

    Despite the growing use of computers and software in every facet of our economy, not until recently has computer science education begun to gain traction in American school systems. The current focus on improving science, technology, engineering, and mathematics (STEM) education in the U.S. School system has disregarded differences within STEM…

  12. Learning Consequences of Mobile-Computing Technologies: Differential Impacts on Integrative Learning and Skill-Focused Learning

    ERIC Educational Resources Information Center

    Kumi, Richard; Reychav, Iris; Sabherwal, Rajiv

    2016-01-01

    Many educational institutions are integrating mobile-computing technologies (MCT) into the classroom to improve learning outcomes. There is also a growing interest in research to understand how MCT influence learning outcomes. The diversity of results in prior research indicates that computer-mediated learning has different effects on various…

  13. The Impact of Three-Dimensional Computational Modeling on Student Understanding of Astronomical Concepts: A Quantitative Analysis

    ERIC Educational Resources Information Center

    Hansen, John; Barnett, Michael; MaKinster, James; Keating, Thomas

    2004-01-01

    The increased availability of computational modeling software has created opportunities for students to engage in scientific inquiry through constructing computer-based models of scientific phenomena. However, despite the growing trend of integrating technology into science curricula, educators need to understand what aspects of these technologies…

  14. Assured Access/Mobile Computing Initiatives on Five University Campuses.

    ERIC Educational Resources Information Center

    Blurton, Craig; Chee, Yam San; Long, Phillip D.; Resmer, Mark; Runde, Craig

    Mobile computing and assured access are becoming popular terms to describe a growing number of university programs which take advantage of ubiquitous network access points and the portability of notebook computers to ensure all students have access to digital tools and resources. However, the implementation of such programs varies widely from…

  15. Software Solution Saves Dollars

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2004-01-01

    This article discusses computer software that can give classrooms and computer labs the capabilities of costly PC's at a small fraction of the cost. A growing number of cost-conscious school districts are finding budget relief in low-cost computer software known as "open source" that can do everything from manage school Web sites to equip…

  16. The "Magic" of Wireless Access in the Library

    ERIC Educational Resources Information Center

    Balas, Janet L.

    2006-01-01

    It seems that the demand for public access computers grows exponentially every time a library network is expanded, making it impossible to ever have enough computers available for patrons. One solution that many libraries are implementing to ease the demand for public computer use is to offer wireless technology that allows patrons to bring in…

  17. Outline of the Course in Automated Language Processing.

    ERIC Educational Resources Information Center

    Pacak, M.; Roberts, A. Hood

    The course in computational linguistics described in this paper was given at The American University during the spring semester of 1969. The purpose of the course was "to convey to students with no previous experience an appreciation of the growing art of computational linguistics which encompasses every use to which computers can be put in…

  18. Prospective Teachers' Views on the Use of Calculators with Computer Algebra System in Algebra Instruction

    ERIC Educational Resources Information Center

    Ozgun-Koca, S. Ash

    2010-01-01

    Although growing numbers of secondary school mathematics teachers and students use calculators to study graphs, they mainly rely on paper-and-pencil when manipulating algebraic symbols. However, the Computer Algebra Systems (CAS) on computers or handheld calculators create new possibilities for teaching and learning algebraic manipulation. This…

  19. Teach Graphic Design Basics with PowerPoint

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Spotts, Thomas H.

    2007-01-01

    While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…

  20. Modeling and Analysis of Power Processing Systems. [use of a digital computer for designing power plants

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  1. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, D. A.; Harris, D. B.

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
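
    The correlation detector and the efficiency measure described above can be sketched in a few lines. The sketch below is a hedged illustration, not the authors' pipeline: it slides a single template over a one-component stream using the normalized (Pearson) cross-correlation, with the threshold value and function names chosen for illustration only.

      import numpy as np

      def correlation_detector(stream, template, threshold=0.7):
          """Slide a waveform template over a continuous stream and return
          sample indices where the normalized cross-correlation exceeds the
          detection threshold (illustrative values, single component)."""
          n = len(template)
          t = (template - template.mean()) / (template.std() * n)
          detections = []
          for i in range(len(stream) - n + 1):
              w = stream[i:i + n]
              s = w.std()
              if s == 0:
                  continue
              cc = np.dot(t, (w - w.mean()) / s)   # Pearson correlation in [-1, 1]
              if cc >= threshold:
                  detections.append(i)
          return detections

      # Efficiency as defined in the abstract: the fraction of all detected
      # events that were found by the correlators rather than by power detectors.
      def efficiency(correlator_events, all_events):
          return len(correlator_events) / max(len(all_events), 1)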

  2. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE PAGES

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.

  3. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for the small increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior at a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
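
    The serial-to-parallel transformation the paper benchmarks is performed by compilers (OpenMP, PluTo-SICA, PPCG, OpenACC) on C code. As a language-neutral illustration of the same map-reduce reshaping applied to one of the use cases (k-mer counting), the following Python sketch splits independent iterations across worker processes; the k-mer length, data, and function names are placeholders, not the paper's benchmark setup.

      from collections import Counter
      from multiprocessing import Pool

      K = 8  # k-mer length (illustrative)

      def count_kmers(seq):
          """Serial k-mer counting for a single sequence."""
          return Counter(seq[i:i + K] for i in range(len(seq) - K + 1))

      def count_kmers_parallel(sequences, workers=4):
          """Split the independent per-sequence loop iterations across worker
          processes and merge the partial counts (a map-reduce reshaping of
          the serial loop, analogous to what an auto-parallelizer does)."""
          with Pool(workers) as pool:
              partial = pool.map(count_kmers, sequences)
          total = Counter()
          for c in partial:
              total.update(c)
          return total

      if __name__ == "__main__":
          seqs = ["ACGTACGTACGT" * 100] * 8   # toy data
          print(count_kmers_parallel(seqs).most_common(3))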

  4. Closed Loop Experiment Manager (CLEM)-An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments.

    PubMed

    Hazan, Hananel; Ziv, Noam E

    2017-01-01

    There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy to implement facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level.
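
    As an illustration of the closed-loop idea described above (and not of CLEM's actual DLL interface), the sketch below shows the generic acquire-detect-stimulate cycle; read_samples and trigger_stimulus are hypothetical placeholders standing in for the data-acquisition-board and stimulator calls.

      import time
      import numpy as np

      def read_samples(n_channels=64, n_samples=100):
          """Placeholder for the acquisition-board read; returns one buffer of
          multichannel data (random here, real hardware in practice)."""
          return np.random.randn(n_channels, n_samples)

      def trigger_stimulus(channel):
          """Placeholder for a digital/analog output call to a stimulator."""
          print(f"stimulate via channel {channel}")

      def closed_loop(threshold=3.0, cycles=1000):
          """Soft real-time loop: acquire a buffer, apply a simple detection
          rule, and close the loop by triggering a stimulus when it fires."""
          for _ in range(cycles):
              buf = read_samples()
              peaks = np.abs(buf).max(axis=1)          # per-channel peak amplitude
              for ch in np.where(peaks > threshold)[0]:
                  trigger_stimulus(ch)
              time.sleep(0.001)                        # pacing; a real system blocks on the driver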

  5. Multi-scale lung modeling.

    PubMed

    Tawhai, Merryn H; Bates, Jason H T

    2011-05-01

    Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.

  6. Closed Loop Experiment Manager (CLEM)—An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments

    PubMed Central

    Hazan, Hananel; Ziv, Noam E.

    2017-01-01

    There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy to implement facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level. PMID:29093659

  7. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.

    PubMed

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for the small increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior at a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.

  8. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    Advent of technology has caused growing interest in using computers to convert conventional paper and pencil-based testing (Henceforth PPT) into Computer-based testing (Henceforth CBT) in the field of education during last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…

  9. A Computer Model for Red Blood Cell Chemistry

    DTIC Science & Technology

    1996-10-01

    There is a growing need for interactive computational tools for medical education and research. The most exciting... paradigm for interactive education is simulation. Fluid Mod is a simulation-based computational tool developed in the late sixties and early seventies at... to a modern Windows, object-oriented interface. This development will provide students with a useful computational tool for learning. More important…

  10. Prediction of a Francis turbine prototype full load instability from investigations on the reduced scale model

    NASA Astrophysics Data System (ADS)

    Alligné, S.; Maruzewski, P.; Dinh, T.; Wang, B.; Fedorov, A.; Iosfin, J.; Avellan, F.

    2010-08-01

    The growing development of renewable energies, combined with the process of privatization, leads to a change in energy market strategies. Instantaneous pricing of electricity as a function of demand or predictions induces profitable peak production, which is mainly covered by hydroelectric power plants. Therefore, operators harness more hydroelectric facilities at full load operating conditions. However, the Francis turbine features an axi-symmetric vortex rope leaving the runner which may act under certain conditions as an internal energy source leading to instability. Undesired power and pressure fluctuations are induced which may limit the maximum available power output. BC Hydro experiences such constraints in a hydroelectric power plant consisting of four 435 MW Francis turbine generating units, located in Canada's province of British Columbia. Under specific full load operating conditions, one unit experiences power and pressure fluctuations at 0.46 Hz. The aim of the paper is to present a methodology allowing prediction of this prototype's instability frequency from investigations on the reduced scale model. A new hydroacoustic vortex rope model has been developed in the SIMSEN software, taking into account the energy dissipation due to the thermodynamic exchange between the gas and the surrounding liquid. A combination of measurements, CFD simulations and computation of eigenmodes of the reduced scale model installed on the test rig allows accurate calibration of the vortex rope model parameters at the model scale. Then, transposition of the parameters to the prototype according to similitude laws is applied and a stability analysis of the power plant is performed. The eigenfrequency of 0.39 Hz related to the first eigenmode of the power plant is determined to be unstable. The predicted frequency of the full load power and pressure fluctuations at the unit's unstable operating point is found to be in general agreement with the prototype measurements.

  11. [The unbroken power of psychiatry as seen through the eyes of Michel Foucault].

    PubMed

    Kraan, H F

    2006-01-01

    Is the psychiatrist still a powerful force in society? Foucault, a 'historical philosopher' concerned with power relations, would have answered this question in the affirmative. Possibly, however, the psychiatrist's sovereign power is weaker than it was a century ago because some of the psychiatrist's tasks have been re-allocated. Some have been assigned to the growing number of specialist groups in the mental health service, others have been put in the hands of 'health managers' who form part of our country's growing bureaucracy and put a financial and economic burden on our health service. Nevertheless, the procedural power of the psychiatrist has not been weakened; psychiatrists are able to deprive patients of their freedom, pronounce them unfit for work and reduce punishments and sentences for serious crime on the grounds of diminished responsibility. This procedural power is accentuated by the increasing influence of psychology in society. The power of psychiatric knowledge has shifted from an archaic to a demonstrative discourse about truth which is rooted in evidence-based medicine and which enhances the power of psychiatrists still further. This may also mean that the 19th century concept of hysteria is perpetuated in psychiatric practice in all kinds of modern clinical forms.

  12. 2011 Computation Directorate Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products.
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

  13. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  14. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  15. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
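
    The mathematical object behind the C++ expression templates is simple to state. The following sketch (in Python rather than C++, and unrelated to the authors' library) shows truncated-power-series multiplication, the operation whose intermediate temporaries expression templates are designed to avoid.

      def tps_mul(a, b, order):
          """Multiply two truncated power series given as coefficient lists
          [c0, c1, ...], discarding terms above the truncation order."""
          c = [0.0] * (order + 1)
          for i, ai in enumerate(a[:order + 1]):
              for j, bj in enumerate(b[:order + 1 - i]):
                  c[i + j] += ai * bj
          return c

      # (1 + x) * (1 - x + x^2) = 1 + x^3, so truncation at order 2 keeps only 1.
      print(tps_mul([1.0, 1.0], [1.0, -1.0, 1.0], order=2))   # [1.0, 0.0, 0.0]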

  16. Rapidly growing glandular papilloma associated with mucus production: a case report

    PubMed Central

    2014-01-01

    Background: Pulmonary glandular papillomas are rare neoplasms, and their very slow or absent growth over time generally facilitates establishing the diagnosis. Case presentation: In an 84-year-old woman who underwent surgery for sigmoid colon cancer, a growing solitary pulmonary nodule was identified on postoperative follow-up computed tomography. A computed tomography-guided needle biopsy was performed under suspicion that the nodule was malignant. The histopathological findings suggested a glandular papilloma. Right basilar segmentectomy was carried out, and the lesion was completely resected. Postoperative histopathological examination revealed a benign glandular papilloma accompanied by mucus retention in the surrounding alveolar region. Conclusions: A malignant neoplasm is usually suspected when a pulmonary tumor shows rapid growth. However, glandular papillomas associated with mucus retention also tend to grow in some cases, and should be included in the differential diagnosis. PMID:24886616

  17. Systems and methods for rapid processing and storage of data

    DOEpatents

    Stalzer, Mark A.

    2017-01-24

    Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.

  18. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.

  19. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular to search for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
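
    The server-side bag-of-words model mentioned above can be sketched as follows: local descriptors are quantized to their nearest visual word and images are compared by their word histograms. The vocabulary and descriptors below are random placeholders; a real system would use SIFT-like features, a large learned vocabulary, and an inverted file.

      import numpy as np

      def bow_histogram(descriptors, vocabulary):
          """Quantize local descriptors to their nearest visual word and
          return an L1-normalized word histogram for the image."""
          d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
          words = d2.argmin(axis=1)
          hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
          return hist / hist.sum()

      def similarity(h1, h2):
          """Histogram intersection, one common bag-of-words matching score."""
          return np.minimum(h1, h2).sum()

      rng = np.random.default_rng(1)
      vocab = rng.standard_normal((50, 128))        # placeholder visual vocabulary
      query = bow_histogram(rng.standard_normal((300, 128)), vocab)
      db_img = bow_histogram(rng.standard_normal((250, 128)), vocab)
      print(similarity(query, db_img))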

  20. Attractor-Based Obstructions to Growth in Homogeneous Cyclic Boolean Automata.

    PubMed

    Khan, Bilal; Cantor, Yuri; Dombrowski, Kirk

    2015-11-01

    We consider a synchronous Boolean organism consisting of N cells arranged in a circle, where each cell initially takes on an independently chosen Boolean value. During the lifetime of the organism, each cell updates its own value by responding to the presence (or absence) of diversity amongst its two neighbours' values. We show that if all cells eventually take a value of 0 (irrespective of their initial values) then the organism necessarily has a cell count that is a power of 2. In addition, the converse is also proved: if the number of cells in the organism is a proper power of 2, then no matter what the initial values of the cells are, eventually all cells take on a value of 0 and then cease to change further. We argue that such an absence of structure in the dynamical properties of the organism implies a lack of adaptiveness, and so is evolutionarily disadvantageous. It follows that as the organism doubles in size (say from m to 2m) it will necessarily encounter an intermediate size that is a proper power of 2, and suffers from low adaptiveness. Finally we show, through computational experiments, that one way an organism can grow to more than twice its size and still avoid passing through intermediate sizes that lack structural dynamics, is for the organism to depart from assumptions of homogeneity at the cellular level.
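
    One natural formalization of the update rule described above, assuming "responding to diversity" means a cell becomes 1 exactly when its two neighbours disagree, is the XOR of the two neighbours' values. Under that assumption, the short sketch below exhaustively checks which small ring sizes always die out to the all-zero state.

      import itertools

      def step(state):
          """One synchronous update: a cell becomes 1 exactly when its two
          neighbours on the circle disagree (XOR of the neighbours)."""
          n = len(state)
          return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

      def always_dies(n, max_steps=None):
          """Check whether every initial configuration of n cells eventually
          reaches the all-zero state."""
          max_steps = max_steps or 2 * n
          for init in itertools.product((0, 1), repeat=n):
              s = init
              for _ in range(max_steps):
                  if not any(s):
                      break
                  s = step(s)
              if any(s):
                  return False
          return True

      # Exhaustive check for small rings; sizes that are powers of 2 die out.
      for n in range(2, 9):
          print(n, always_dies(n))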

  1. Attractor-Based Obstructions to Growth in Homogeneous Cyclic Boolean Automata

    PubMed Central

    Khan, Bilal; Cantor, Yuri; Dombrowski, Kirk

    2016-01-01

    We consider a synchronous Boolean organism consisting of N cells arranged in a circle, where each cell initially takes on an independently chosen Boolean value. During the lifetime of the organism, each cell updates its own value by responding to the presence (or absence) of diversity amongst its two neighbours’ values. We show that if all cells eventually take a value of 0 (irrespective of their initial values) then the organism necessarily has a cell count that is a power of 2. In addition, the converse is also proved: if the number of cells in the organism is a proper power of 2, then no matter what the initial values of the cells are, eventually all cells take on a value of 0 and then cease to change further. We argue that such an absence of structure in the dynamical properties of the organism implies a lack of adaptiveness, and so is evolutionarily disadvantageous. It follows that as the organism doubles in size (say from m to 2m) it will necessarily encounter an intermediate size that is a proper power of 2, and suffers from low adaptiveness. Finally we show, through computational experiments, that one way an organism can grow to more than twice its size and still avoid passing through intermediate sizes that lack structural dynamics, is for the organism to depart from assumptions of homogeneity at the cellular level. PMID:27660398

  2. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
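
    A standard way to combine low- and high-precision arithmetic for linear systems, in the spirit of the paradigm described above (though not necessarily the authors' exact kernel), is iterative refinement: solve in single precision, then correct the solution using double-precision residuals. A minimal sketch:

      import numpy as np

      def mixed_precision_solve(A, b, iters=10, tol=1e-12):
          """Solve Ax = b using a cheap float32 inner solve refined with
          float64 residuals (iterative refinement). A real implementation
          would factorize the float32 matrix once and reuse the factors."""
          A32 = A.astype(np.float32)
          x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
          for _ in range(iters):
              r = b - A @ x                          # high-precision residual
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
              d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
              x += d
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
      b = rng.standard_normal(200)
      x = mixed_precision_solve(A, b)
      print(np.linalg.norm(A @ x - b))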

  3. Web-based services for drug design and discovery.

    PubMed

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  4. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist the segmentation during massive CT screening tests.
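
    For reference, a minimal serial seeded region growing loop looks like the sketch below (an illustration, not the paper's CUDA implementation); its frontier is processed one pixel at a time, which is exactly the cost that grows with region size and that the CUDA version parallelizes.

      from collections import deque
      import numpy as np

      def seeded_region_grow(image, seed, tol=10.0):
          """Grow a region from `seed` by absorbing 4-connected neighbours
          whose intensity stays within `tol` of the running region mean."""
          h, w = image.shape
          mask = np.zeros((h, w), dtype=bool)
          mask[seed] = True
          total, count = float(image[seed]), 1
          frontier = deque([seed])
          while frontier:
              y, x = frontier.popleft()
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                  if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                      if abs(float(image[ny, nx]) - total / count) <= tol:
                          mask[ny, nx] = True
                          total += float(image[ny, nx])
                          count += 1
                          frontier.append((ny, nx))
          return mask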

  5. Cardiorespiratory dynamic response to mental stress: a multivariate time-frequency analysis.

    PubMed

    Widjaja, Devy; Orini, Michele; Vlemincx, Elke; Van Huffel, Sabine

    2013-01-01

    Mental stress is a growing problem in our society. In order to deal with this, it is important to understand the underlying stress mechanisms. In this study, we aim to determine how the cardiorespiratory interactions are affected by mental arithmetic stress and attention. We conduct cross time-frequency (TF) analyses to assess the cardiorespiratory coupling. In addition, we introduce partial TF spectra to separate variations in the RR interval series that are linearly related to respiration from RR interval variations (RRV) that are not related to respiration. The performance of partial spectra is evaluated in two simulation studies. Time-varying parameters, such as instantaneous powers and frequencies, are derived from the computed spectra. Statistical analysis is carried out continuously in time to evaluate the dynamic response to mental stress and attention. The results show an increased heart and respiratory rate during stress and attention, compared to a resting condition. Also a fast reduction in vagal activity is noted. The partial TF analysis reveals a faster reduction of RRV power related to (3 s) than unrelated to (30 s) respiration, demonstrating that the autonomic response to mental stress is driven by mechanisms characterized by different temporal scales.

  6. Modeling of microstructure evolution in direct metal laser sintering: A phase field approach

    NASA Astrophysics Data System (ADS)

    Nandy, Jyotirmoy; Sarangi, Hrushikesh; Sahoo, Seshadev

    2017-02-01

    Direct Metal Laser Sintering (DMLS) is a new technology in the field of additive manufacturing, which builds metal parts in a layer-by-layer fashion directly from the powder bed. The process occurs within a very short time period with a rapid solidification rate. Slight variations in the process parameters may cause enormous changes in the final build parts. The physical and mechanical properties of the final build parts depend on the solidification rate, which directly affects the microstructure of the material. Thus, the evolution of the microstructure plays a vital role in process parameter optimization. Nowadays, the increase in computational power allows for direct simulations of microstructures during materials processing for specific manufacturing conditions. In this study, modeling of the microstructure evolution of Al-Si-10Mg powder in the DMLS process was carried out by using a phase field approach. A MATLAB code was developed to solve the set of phase field equations, where simulation parameters include temperature gradient, laser scan speed and laser power. The effects of the temperature gradient on microstructure evolution were studied, and it was found that with an increase in temperature gradient, the dendritic tip grows at a faster rate.
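
    The authors solve their phase field equations in MATLAB; as a generic illustration of the approach (with illustrative parameters, not the paper's Al-Si-10Mg model), the sketch below advances a one-dimensional Allen-Cahn order parameter with an explicit time step, producing a diffuse solid-liquid interface.

      import numpy as np

      # Minimal explicit 1-D Allen-Cahn step: d(phi)/dt = eps^2 * d2(phi)/dx2 - W'(phi),
      # with the double well W(phi) = phi^2 (1 - phi)^2. All parameters are illustrative.
      def allen_cahn(n=200, steps=2000, dx=1.0, dt=0.05, eps=2.0):
          phi = np.zeros(n)
          phi[: n // 2] = 1.0                        # solid seed on the left half
          for _ in range(steps):
              lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
              dW = 2 * phi * (1 - phi) * (1 - 2 * phi)   # derivative of the double well
              phi += dt * (eps**2 * lap - dW)
              phi[0], phi[-1] = 1.0, 0.0             # fixed boundary values
          return phi

      print(allen_cahn()[95:105])                     # diffuse interface near the midpoint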

  7. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  8. Electricity Capacity Expansion Modeling, Analysis, and Visualization: A Summary of High-Renewable Modeling Experiences (Chinese Translation) (in Chinese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, Nate; Zhou, Ella; Getman, Dan

    2015-10-01

    This is the Chinese translation of NREL/TP-6A20-64831. Mathematical and computational models are widely used for the analysis and design of both physical and financial systems. Modeling the electric grid is of particular importance to China for three reasons. First, power-sector assets are expensive and long-lived, and they are critical to any country's development. China's electric load, transmission, and other energy-related infrastructure are expected to continue to grow rapidly; therefore it is crucial to understand and help plan for the future in which those assets will operate. Second, China has dramatically increased its deployment of renewable energy (RE), and is likely to continue further accelerating such deployment over the coming decades. Careful planning and assessment of the various aspects (technical, economic, social, and political) of integrating a large amount of renewables on the grid is required. Third, companies need the tools to develop a strategy for their own involvement in the power market China is now developing, and to enable a possible transition to an efficient and high RE future.

  9. Wind Energy Applications for Municipal Water Services: Opportunities, Situation Analyses, and Case Studies; Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flowers, L.; Miner-Nordstrom, L.

    2006-01-01

    As communities grow, greater demands are placed on water supplies, wastewater services, and the electricity needed to power the growing water services infrastructure. Water is also a critical resource for thermoelectric power plants. Future population growth in the United States is therefore expected to heighten competition for water resources. Many parts of the United States with increasing water stresses also have significant wind energy resources. Wind power is the fastest-growing electric generation source in the United States and is decreasing in cost to be competitive with thermoelectric generation. Wind energy can offer communities in water-stressed areas the option of economically meeting increasing energy needs without increasing demands on valuable water resources. Wind energy can also provide targeted energy production to serve critical local water-system needs. The research presented in this report describes a systematic assessment of the potential for wind power to support water utility operation, with the objective to identify promising technical applications and water utility case study opportunities. The first section describes the current situation that municipal providers face with respect to energy and water. The second section describes the progress that wind technologies have made in recent years to become a cost-effective electricity source. The third section describes the analysis employed to assess potential for wind power in support of water service providers, as well as two case studies. The report concludes with results and recommendations.

  10. Characterizing Topology of Probabilistic Biological Networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
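
    The polynomial-time computation mentioned above can be illustrated for a single node: if each incident edge exists independently with its own probability, the node's degree follows a Poisson-binomial distribution, which a simple dynamic program evaluates without enumerating the exponentially many topologies. This sketch illustrates that idea and is not the authors' exact method.

      def degree_distribution(edge_probs):
          """Distribution of a node's degree when each incident edge exists
          independently with the given probability (Poisson binomial),
          computed by a dynamic program in O(m^2) for m edges rather than
          enumerating the 2^m possible topologies."""
          dist = [1.0]                      # P(degree = k) before any edge is considered
          for p in edge_probs:
              new = [0.0] * (len(dist) + 1)
              for k, q in enumerate(dist):
                  new[k] += q * (1 - p)     # edge absent
                  new[k + 1] += q * p       # edge present
              dist = new
          return dist

      print(degree_distribution([0.9, 0.5, 0.1]))   # sums to 1 over degrees 0..3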

  11. Context-aware system design

    NASA Astrophysics Data System (ADS)

    Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana

    2017-05-01

    The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.

  12. TRIO: Burst Buffer Based I/O Orchestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Oral, H Sarp; Pritchard, Michael

    The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently the burst buffer has been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.

  13. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.

  14. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  15. Virtual Observatory and Distributed Data Mining

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2012-03-01

    New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. 
This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.
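
    The NCSA anecdote quoted above is just the arithmetic of annual doubling: when production doubles each year, a single year's output roughly equals everything produced before it. A small illustrative calculation (the 19-year span and 1 PB figure are taken from the abstract; everything else is a sketch):

        # Illustration of annual data-doubling: with a growth factor of 2,
        # each year's production matches the cumulative total of all prior years.
        yearly = 1.0 / (2**19 - 1)   # normalize so the first 19 years sum to ~1 PB
        total = 0.0
        for year in range(1, 21):
            total += yearly
            print(f"year {year:2d}: produced {yearly:.6f} PB, cumulative {total:.4f} PB")
            yearly *= 2              # data volume doubles every year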

  16. 7 CFR 1710.302 - Financial forecasts-power supply borrowers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... facilities; (3) Provide an in-depth analysis of the regional markets for power if loan feasibility depends to any degree on a borrower's ability to sell surplus power while its system loads grow to meet the... sensitivity analysis if required by RUS pursuant to § 1710.300(d)(5). (e) The projections shall be coordinated...

  17. Cloud Computing Security Issue: Survey

    NASA Astrophysics Data System (ADS)

    Kamal, Shailza; Kaur, Rajpreet

    2011-12-01

    Cloud computing has been a growing field in the IT industry since it was proposed by IBM in 2007, and other companies such as Google, Amazon, and Microsoft now provide further cloud computing products. Cloud computing is Internet-based computing that shares resources and information on demand. It provides services such as SaaS, IaaS, and PaaS, and the services and resources are shared through virtualization, which runs multiple applications on the cloud. This discussion surveys the security challenges that arise in cloud computing and describes some standards and protocols showing how security can be managed.

  18. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success.

    PubMed

    Thrall, James H; Li, Xiang; Li, Quanzheng; Cruz, Cinthia; Do, Synho; Dreyer, Keith; Brink, James

    2018-03-01

    Worldwide interest in artificial intelligence (AI) applications, including imaging, is high and growing rapidly, fueled by availability of large datasets ("big data"), substantial advances in computing power, and new deep-learning algorithms. Apart from developing new AI methods per se, there are many opportunities and challenges for the imaging community, including the development of a common nomenclature, better ways to share image data, and standards for validating AI program use across different imaging platforms and patient populations. AI surveillance programs may help radiologists prioritize work lists by identifying suspicious or positive cases for early review. AI programs can be used to extract "radiomic" information from images not discernible by visual inspection, potentially increasing the diagnostic and prognostic value derived from image datasets. Predictions have been made that suggest AI will put radiologists out of business. This issue has been overstated, and it is much more likely that radiologists will beneficially incorporate AI methods into their practices. Current limitations in availability of technical expertise and even computing power will be resolved over time and can also be addressed by remote access solutions. Success for AI in imaging will be measured by value created: increased diagnostic certainty, faster turnaround, better outcomes for patients, and better quality of work life for radiologists. AI offers a new and promising set of methods for analyzing image data. Radiologists will explore these new pathways and are likely to play a leading role in medical applications of AI. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. Pedagogy Matters: Engaging Diverse Students as Community Researchers in Three Computer Science Classrooms

    ERIC Educational Resources Information Center

    Ryoo, Jean Jinsun

    2013-01-01

    Computing occupations are among the fastest growing in the U.S. and technological innovations are central to solving world problems. Yet only our most privileged students are learning to use technology for creative purposes through rigorous computer science education opportunities. In order to increase access for diverse students and females who…

  20. A Literature Review of Computers and Pedagogy for Journalism and Mass Communication Education.

    ERIC Educational Resources Information Center

    Hoag, Anne M.; Bhattacharya, Sandhya; Helsel, Jeffrey; Hu, Yifeng; Lee, Sangki; Kim, Jinhee; Kim, Sunghae; Michael, Patty Wharton; Park, Chongdae; Sager, Sheila S.; Seo, Sangho; Stark, Craig; Yeo, Benjamin

    2003-01-01

    Notes that a growing body of scholarship on computers and pedagogy encompasses a broad range of topics. Focuses on research judged to have implications within journalism and mass communication education. Discusses literature which considers computer use in course design and teaching, student attributes in a digital learning context, the role of…

  1. Hiding in Plain Sight: Identifying Computational Thinking in the Ontario Elementary School Curriculum

    ERIC Educational Resources Information Center

    Hennessey, Eden J. V.; Mueller, Julie; Beckett, Danielle; Fisher, Peter A.

    2017-01-01

    Given a growing digital economy with complex problems, demands are being made for education to address computational thinking (CT)--an approach to problem solving that draws on the tenets of computer science. We conducted a comprehensive content analysis of the Ontario elementary school curriculum documents for 44 CT-related terms to examine the…

  2. An Undergraduate Computer Engineering Option for Electrical Engineering.

    ERIC Educational Resources Information Center

    National Academy of Engineering, Washington, DC. Commission on Education.

    This report is the result of a study, funded by the National Science Foundation, of a group constituted as the COSINE Task Force on Undergraduate Education in Computer Engineering in 1969. The group was formed in response to the growing demand for education in computer engineering and the limited opportunities for study in this area. Computer…

  3. Mind and Material: The Interplay between Computer-Related and Second Language Factors in Online Communication Dialogues

    ERIC Educational Resources Information Center

    Wu, Pin-hsiang Natalie; Kawamura, Michelle

    2014-01-01

    With a growing demand for learning English and a trend of utilizing computers in education, methods that can achieve the effectiveness of computer-mediated communication (CMC) to support language learning in higher education have been examined. However, second language factors manipulate both the process and production of CMC and, therefore,…

  4. Studying Computer Science in a Multidisciplinary Degree Programme: Freshman Students' Orientation, Knowledge, and Background

    ERIC Educational Resources Information Center

    Kautz, Karlheinz; Kofoed, Uffe

    2004-01-01

    Teachers at universities are facing an increasing disparity in students' prior IT knowledge and, at the same time, experience a growing disengagement of the students with regard to involvement in study activities. As computer science teachers in a joint programme in computer science and business administration, we made a number of similar…

  5. Educational Outcomes and Research from 1:1 Computing Settings

    ERIC Educational Resources Information Center

    Bebell, Damian; O'Dwyer, Laura M.

    2010-01-01

    Despite the growing interest in 1:1 computing initiatives, relatively little empirical research has focused on the outcomes of these investments. The current special edition of the Journal of Technology and Assessment presents four empirical studies of K-12 1:1 computing programs and one review of key themes in the conversation about 1:1 computing…

  6. "PowerPoint" Is Not Just for Business Presentations and College Lectures: Using "PowerPoint" to Enhance Instruction for Students with Disabilities

    ERIC Educational Resources Information Center

    Coleman, Mari Beth

    2009-01-01

    Microsoft PowerPoint software is widely used in business and higher education and is growing in use with school-aged students. A small body of research has demonstrated that it can be effective in enhancing skill instruction for individuals with disabilities. PowerPoint is not a difficult program to learn, but it provides endless possibilities for…

  7. DET/MPS - The GSFC Energy Balance Programs

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1994-01-01

    Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in design and analysis of DET and MPS spacecraft power system performance in order to determine energy balance of subsystem. DET spacecraft power system feeds output of solar photovoltaic array and nickel cadmium batteries directly to spacecraft bus. In MPS system, Standard Power Regulator Unit (SPRU) is utilized to operate array at its peak power point. DET and MPS perform minute-by-minute simulation of performance of power system. Results of simulation focus mainly on output of solar array and characteristics of batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and performance of arrays for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.

  8. Managing complexity in simulations of land surface and near-surface processes

    DOE PAGES

    Coon, Ethan T.; Moulton, J. David; Painter, Scott L.

    2016-01-12

    Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
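
    The dependency-graph idea described here can be sketched generically: each variable registers an evaluator and its dependencies, and requesting a variable recursively evaluates its dependencies first. The sketch below is illustrative only and is not the Arcos API; all names are invented.

        # Minimal sketch of a dependency graph for model evaluation (illustrative only,
        # not the Arcos API). Each variable has an evaluator and a list of dependencies.
        class DependencyGraph:
            def __init__(self):
                self.evaluators = {}    # name -> (function, [dependency names])
                self.values = {}

            def register(self, name, func, deps=()):
                self.evaluators[name] = (func, list(deps))

            def evaluate(self, name):
                # Recursively evaluate dependencies first, caching results.
                if name in self.values:
                    return self.values[name]
                func, deps = self.evaluators[name]
                args = [self.evaluate(d) for d in deps]
                self.values[name] = func(*args)
                return self.values[name]

        g = DependencyGraph()
        g.register("pressure", lambda: 101325.0)
        g.register("temperature", lambda: 283.15)
        g.register("density", lambda p, t: p / (287.05 * t), deps=["pressure", "temperature"])
        print(g.evaluate("density"))   # density is recomputed only after its dependencies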

  9. Gender, technology change and globalization: the case of China.

    PubMed

    Guo, H; Zhao, M

    1999-01-01

    This paper reviews the experience of women workers in China while the country's economy is changing into a globalized, technologically advanced one. New computer-based technology is increasingly acknowledged as a powerful and pervasive force that can shape or, at least in many ways, affect employment. It is hailed for opening up fresh employment opportunities and reducing the physical stress involved in work. However, the possibilities of redundancies or intensification of workload also exist. By focusing on changes in women's work, the article reveals the contradictions inherent in following a development path based on ever-higher levels of technology in the context of an intensive mode of production, to which productivity is the core value. The economy is bolstered and some workers gain employment in expanding industries. However, workers, who lack access to training and who are reliant on the dwindling state support for their reproductive responsibilities, are marginalized and seek employment in the growing informal economy.

  10. White House announces “big data” initiative

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2012-04-01

    The world is now generating zettabytes—which is 10 to the 21st power, or a billion trillion bytes—of information every year, according to John Holdren, director of the White House Office of Science and Technology Policy. With data volumes growing exponentially from a variety of sources such as computers running large-scale models, scientific instruments including telescopes and particle accelerators, and even online retail transactions, a key challenge is to better manage and utilize the data. The Big Data Research and Development Initiative, launched by the White House at a 29 March briefing, initially includes six federal departments and agencies providing more than $200 million in new commitments to improve tools and techniques for better accessing, organizing, and using data for scientific advances. The agencies and departments include the National Science Foundation (NSF), Department of Energy, U.S. Geological Survey (USGS), National Institutes of Health (NIH), Department of Defense, and Defense Advanced Research Projects Agency.

  11. Measuring surface flow velocity with smartphones: potential for citizen observatories

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; Chen, Zichong; Brauchli, Tristan; Huwald, Hendrik

    2014-05-01

    Stream flow velocity is an important variable for discharge estimation and research on sediment dynamics. Given the influence of the latter on rating curves (stage-discharge relations), and the relative scarcity of direct streamflow measurements, surface velocity measurements can offer important information for, e.g., flood warning, hydropower, and hydrological science and engineering in general. With the growing amount of sensing and computing power in the hands of more outdoorsy individuals, and the advances in image processing techniques, there is now a tremendous potential to obtain hydrologically relevant data from motivated citizens. This is the main focus of the interdisciplinary "WeSenseIt" project, a citizen observatory of water. In this subproject, we investigate the feasibility of stream flow surface velocity measurements from movie clips taken by (smartphone-) cameras. First results from movie-clip derived velocity information will be shown and compared to reference measurements.

  12. A performance study of live VM migration technologies: VMotion vs XenMotion

    NASA Astrophysics Data System (ADS)

    Feng, Xiujie; Tang, Jianxiong; Luo, Xuan; Jin, Yaohui

    2011-12-01

    Due to the growing demand for flexible resource management in cloud computing services, research on live virtual machine migration has attracted more and more attention. Live migration of virtual machines across different hosts has become a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and so on. In this paper, we use a measurement-based approach to compare the performance of two major live migration technologies, VMotion and XenMotion, under certain network conditions. The results show that VMotion transfers much less data than XenMotion when migrating identical VMs. However, in networks with moderate packet loss and delay, which are typical of a VPN (virtual private network) scenario used to connect data centers, XenMotion outperforms VMotion in total migration time. We hope that this study can be helpful to data center administrators in choosing suitable virtualization environments and in optimizing existing live migration mechanisms.

  13. Computational Models of Protein Kinematics and Dynamics: Beyond Simulation

    PubMed Central

    Gipson, Bryant; Hsu, David; Kavraki, Lydia E.; Latombe, Jean-Claude

    2016-01-01

    Physics-based simulation represents a powerful method for investigating the time-varying behavior of dynamic protein systems at high spatial and temporal resolution. Such simulations, however, can be prohibitively difficult or lengthy for large proteins or when probing the lower-resolution, long-timescale behaviors of proteins generally. Importantly, not all questions about a protein system require full space and time resolution to produce an informative answer. For instance, by avoiding the simulation of uncorrelated, high-frequency atomic movements, a larger, domain-level picture of protein dynamics can be revealed. The purpose of this review is to highlight the growing body of complementary work that goes beyond simulation. In particular, this review focuses on methods that address kinematics and dynamics, as well as those that address larger organizational questions and can quickly yield useful information about the long-timescale behavior of a protein. PMID:22524225

  14. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space, and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically diverge from the 3-D computational fluid dynamic models predicted during construction, and as time goes by, the efficiency and effectiveness of the actual cooling depart even farther from those predictions. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and to make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  15. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept solves the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  16. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summary: Program title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References: J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  17. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.

    PubMed

    Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru

    2018-05-01

    The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed the research in computational intelligence into a new era. Among the machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of the multicore systems while maintaining high sensitivity and specificity to the anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massive parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphic processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms for one testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to the emerging neuromorphic architectures.
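
    As a rough, generic illustration of unsupervised scoring of a data stream (not the AnRAD confabulation network itself), one can update running statistics incrementally and flag observations that deviate strongly from what has been seen so far:

        # Sketch of incremental anomaly scoring on a stream using Welford's algorithm
        # to update mean/variance online; illustrative only, not the AnRAD framework.
        import math

        class StreamScorer:
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0

            def update(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            def score(self, x):
                # z-score of x relative to the statistics seen so far
                if self.n < 2:
                    return 0.0
                std = math.sqrt(self.m2 / (self.n - 1))
                return abs(x - self.mean) / std if std > 0 else 0.0

        scorer = StreamScorer()
        for value in [1.0, 1.1, 0.9, 1.05, 5.0]:   # the last point is anomalous
            print(value, scorer.score(value))
            scorer.update(value)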

  18. Enhancement of DFT-calculations at petascale: Nuclear Magnetic Resonance, Hybrid Density Functional Theory and Car-Parrinello calculations

    NASA Astrophysics Data System (ADS)

    Varini, Nicola; Ceresoli, Davide; Martin-Samos, Layla; Girotto, Ivan; Cavazzoni, Carlo

    2013-08-01

    One of the most promising techniques for studying the electronic properties of materials is based on the Density Functional Theory (DFT) approach and its extensions. DFT has been widely applied to traditional solid state physics problems, where periodicity and symmetry play a crucial role in reducing the computational workload. With growing compute power capability and the development of improved DFT methods, the range of potential applications now includes other scientific areas such as Chemistry and Biology. However, cross-disciplinary combinations of traditional Solid-State Physics, Chemistry and Biology drastically increase the system complexity while reducing the degree of periodicity and symmetry. Large simulation cells containing hundreds or even thousands of atoms are needed to model these kinds of physical systems, and their treatment remains a computational challenge even with modern supercomputers. In this paper we describe our work to improve the scalability of Quantum ESPRESSO (Giannozzi et al., 2009 [3]) for treating very large cells and huge numbers of electrons. To this end we have introduced an extra level of parallelism, over electronic bands, in three kernels for solving computationally expensive problems: the Sternheimer equation solver (Nuclear Magnetic Resonance, package QE-GIPAW), the Fock operator builder (electronic ground-state, package PWscf) and most of the Car-Parrinello routines (Car-Parrinello dynamics, package CP). Final benchmarks show our success in computing the Nuclear Magnetic Resonance (NMR) chemical shift of a large biological assembly, the electronic structure of defected amorphous silica with hybrid exchange-correlation functionals, and the equilibrium atomic structure of eight porphyrins anchored to a carbon nanotube, on many thousands of CPU cores.
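
    The extra band-level parallelism can be pictured as splitting the global communicator into band groups, each of which owns a contiguous block of band indices. The sketch below is purely conceptual and not the Quantum ESPRESSO implementation; the group count and band count are invented.

        # Conceptual sketch of an extra parallelization level over electronic bands
        # using mpi4py communicator splitting (not the actual Quantum ESPRESSO code).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        n_band_groups = 4                       # hypothetical number of band groups
        nbands = 1024                           # hypothetical total number of bands

        group = comm.rank % n_band_groups       # which band group this rank belongs to
        band_comm = comm.Split(color=group, key=comm.rank)

        # Each group handles a contiguous block of band indices.
        per_group = (nbands + n_band_groups - 1) // n_band_groups
        first = group * per_group
        last = min(nbands, first + per_group)

        if band_comm.rank == 0:
            print(f"band group {group}: bands [{first}, {last})")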

  19. Preserving the Peach: Exploring Creativity in the Corporate Realm.

    ERIC Educational Resources Information Center

    Carrick, Moe

    2000-01-01

    Adventure consultation for businesses has the power and the tools to foster creative genius and grow corporate soul, to counteract the gravitational pull of corporate normalcy, referred to as the "corporate hairball." As the adventure consultant industry grows, it must beware of choking on its own hairballs. Five warning signs of…

  20. Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004

    ERIC Educational Resources Information Center

    AEL, 2004

    2004-01-01

    Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…

  1. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  2. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
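
    For context, the power flow equations solved at each discretization point are nonlinear; a minimal two-bus example (slack bus plus one PQ bus, with invented line and load values) solved by Newton's method looks like the following. It illustrates only the underlying equations, not the NPHC-based procedure of the paper.

        # Minimal two-bus power flow solved with Newton's method (illustrative values).
        import numpy as np

        y_line = 1.0 / (0.02 + 0.08j)       # hypothetical series impedance of the line (p.u.)
        v_slack = 1.0 + 0.0j                # slack bus voltage
        p_load, q_load = 0.8, 0.3           # hypothetical PQ load at bus 2 (p.u.)

        def mismatch(x):
            v, theta = x
            v2 = v * np.exp(1j * theta)
            i2 = y_line * (v2 - v_slack)    # current injected into the network at bus 2
            s2 = v2 * np.conj(i2)           # complex power injection at bus 2
            # The injection must equal the negative of the load.
            return np.array([s2.real + p_load, s2.imag + q_load])

        x = np.array([1.0, 0.0])            # flat start
        for _ in range(20):
            f = mismatch(x)
            if np.max(np.abs(f)) < 1e-10:
                break
            # Numerical Jacobian by finite differences.
            J = np.zeros((2, 2))
            for k in range(2):
                dx = np.zeros(2)
                dx[k] = 1e-7
                J[:, k] = (mismatch(x + dx) - f) / 1e-7
            x = x - np.linalg.solve(J, f)

        print(f"|V2| = {x[0]:.4f} p.u., angle = {np.degrees(x[1]):.3f} deg")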

  3. Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (Power-FLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.

  4. Energy production for environmental issues in Turkey

    NASA Astrophysics Data System (ADS)

    Yuksel, Ibrahim; Arman, Hasan; Halil Demirel, Ibrahim

    2017-11-01

    Due to efforts to diversify energy sources, use of natural gas, which was newly introduced into the Turkish economy, has been growing rapidly. Turkey has large reserves of coal, particularly of lignite: the proven lignite reserves are 8.0 billion tons, and the estimated total possible reserves are 30 billion tons. Turkey, with its young population, growing energy demand per person, fast-growing urbanization, and economic development, has been one of the fastest growing power markets in the world for the last two decades. It is expected that the demand for electric energy in Turkey will be 580 billion kWh by the year 2020, with demand growing about 6-8% yearly due to fast economic growth. This paper deals with energy demand and consumption in relation to environmental issues in Turkey.

  5. Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations

    DTIC Science & Technology

    2007-08-31

    very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power

  6. Advanced Computing Methods for Knowledge Discovery and Prognosis in Acoustic Emission Monitoring

    ERIC Educational Resources Information Center

    Mejia, Felipe

    2012-01-01

    Structural health monitoring (SHM) has gained significant popularity in the last decade. This growing interest, coupled with new sensing technologies, has resulted in an overwhelming amount of data in need of management and useful interpretation. Acoustic emission (AE) testing has been particularly fraught by the problem of growing data and is…

  7. Cybernation and Man--A Course Development Project; Report Number 2. Final Report.

    ERIC Educational Resources Information Center

    Dionne, Edward A.; Parkman, Ralph

    Transportation, Agriculture, and Art are used as examples to establish how technology, especially computer-based technology, is involved in the life of man. Cities grow in population very rapidly; motor vehicles grow in number even more rapidly; both circumstances raise problems--traffic congestion, economic loss, air pollution, urban…

  8. Current implementation and future plans on new code architecture, programming language and user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, B.

    1997-07-01

    Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing, with high-performance graphic interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety-study analyses. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages, and tools have appeared in the last several years.

  9. Show me the road to hydrogen : UTC/transportation fuel research and development

    DOT National Transportation Integrated Search

    2007-01-01

    Hydrogen-powered fuel is an emerging technology that provides an alternative source of fuel to fossil fuel. Commercially viable technologies are emerging that are expected to allow for consumer vehicles powered by hydrogen as part of a growing hydrog...

  10. Evaluation of HRI Payloads for Rapid Precision Target Localization to Provide Information to the Tactical Warfighter

    DTIC Science & Technology

    2011-09-01

    supply for the IMU  switching 5, 12V ATX power supply for the computer and hard drive  An L1/L2 active antenna on small back plane  USB to serial...switching 5, 12V ATX power supply for the computer and hard drive Figure 4. UAS Target Location Technology for Ground Based Observers (TLGBO...15V power supply for the IMU H. switching 5, 12V ATX power supply for the computer & hard drive I. An L1/L2 active antenna on a small back

  11. China’s Soft Power and Growing Influence in Southeast Asia

    DTIC Science & Technology

    2008-03-01

    appropriateness and positive or negative effects generated. In more recent times, China has had a diplomatic makeover and has begun utilizing its soft power...contexts, the United States is the focus of debate over its use of or lack of soft power and the appropriateness and positive or negative effects ...producer’s perspective but the receiver’s view of soft power that is essential. To determine the effectiveness of soft power, an analysis must be made of

  12. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.

  13. SeTES, a Self-Teaching Expert System for the analysis, design and prediction of gas production from shales and a prototype for a new generation of Expert Systems in the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Boyle, K.; Pullman, S.; Reagan, M. T.; Moridis, G. J.; Blasingame, T. A.; Rector, J. W.; Nikolaou, M.

    2010-12-01

    A Self Teaching Expert System (SeTES) is being developed for the analysis, design and prediction of gas production from shales. An Expert System is a computer program designed to answer questions or clarify uncertainties that its designers did not necessarily envision which would otherwise have to be addressed by consultation with one or more human experts. Modern developments in computer learning, data mining, database management, web integration and cheap computing power are bringing the promise of expert systems to fruition. SeTES is a partial successor to Prospector, a system to aid in the identification and evaluation of mineral deposits developed by Stanford University and the USGS in the late 1970s, and one of the most famous early expert systems. Instead of the text dialogue used in early systems, the web user interface of SeTES helps a non-expert user to articulate, clarify and reason about a problem by navigating through a series of interactive wizards. The wizards identify potential solutions to queries by retrieving and combining together relevant records from a database. Inferences, decisions and predictions are made from incomplete and noisy inputs using a series of probabilistic models (Bayesian Networks) which incorporate records from the database, physical laws and empirical knowledge in the form of prior probability distributions. The database is mainly populated with empirical measurements, however an automatic algorithm supplements sparse data with synthetic data obtained through physical modeling. This constitutes the mechanism for how SeTES self-teaches. SeTES’ predictive power is expected to grow as users contribute more data into the system. Samples are appropriately weighted to favor high quality empirical data over low quality or synthetic data. Finally, a set of data visualization tools digests the output measurements into graphical outputs.

  14. Power API Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-12-04

    The software serves two purposes. The first purpose of the software is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort. The specification can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the development of the specification. Release of the prototype will be instructive to anyone who intends to implement the specification. More specifically, our vendor collaborators will benefit from the availability of the prototype. The second is in direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device. The software can be used by anyone who purchases a PowerInsight device. The software will allow the user to easily collect power and energy information of a node that is instrumented with PowerInsight. The software can also be used as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.
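
    Independently of the specific interface, the basic node-level measurement task is to sample instantaneous power and integrate it into energy over time. The sketch below is generic and does not reproduce the Power API or PowerInsight calls; read_power_watts() is a hypothetical stand-in for whatever the device or API actually provides.

        # Generic sketch of turning periodic power samples into an energy estimate
        # via trapezoidal integration; read_power_watts() is a hypothetical stand-in.
        import random, time

        def read_power_watts():
            # Hypothetical sensor read; replace with the real device or API call.
            return 95.0 + random.uniform(-5.0, 5.0)

        def measure_energy(duration_s=5.0, interval_s=0.5):
            energy_j = 0.0
            prev_t, prev_p = time.time(), read_power_watts()
            end = prev_t + duration_s
            while time.time() < end:
                time.sleep(interval_s)
                t, p = time.time(), read_power_watts()
                energy_j += 0.5 * (p + prev_p) * (t - prev_t)   # trapezoidal rule
                prev_t, prev_p = t, p
            return energy_j

        print(f"estimated energy: {measure_energy():.1f} J")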

  15. Scenarios for optimizing potato productivity in a lunar CELSS

    NASA Technical Reports Server (NTRS)

    Wheeler, R. M.; Morrow, R. C.; Tibbitts, T. W.; Bula, R. J.

    1992-01-01

    The use of controlled ecological life support system (CELSS) in the development and growth of large-scale bases on the Moon will reduce the expense of supplying life support materials from Earth. Such systems would use plants to produce food and oxygen, remove carbon dioxide, and recycle water and minerals. In a lunar CELSS, several factors are likely to be limiting to plant productivity, including the availability of growing area, electrical power, and lamp/ballast weight for lighting systems. Several management scenarios are outlined in this discussion for the production of potatoes based on their response to irradiance, photoperiod, and carbon dioxide concentration. Management scenarios that use 12-hr photoperiods, high carbon dioxide concentrations, and movable lamp banks to alternately irradiate halves of the growing area appear to be the most efficient in terms of growing area, electrical power, and lamp weights. However, the optimal scenario will be dependent upon the relative 'costs' of each factor.

  16. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
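
    On Linux systems with Intel RAPL support, a package power cap of the kind studied here can be inspected or changed through the powercap sysfs interface. The sketch below assumes the intel-rapl:0 zone exists and that the process has permission to write it; it is not the instrumentation used in the paper.

        # Sketch of reading and setting a package power cap via the Linux powercap
        # (Intel RAPL) sysfs interface; assumes zone intel-rapl:0 is present and
        # root privileges are available for writing.
        RAPL = "/sys/class/powercap/intel-rapl:0"

        def read_power_limit_watts():
            with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
                return int(f.read()) / 1e6          # file reports microwatts

        def set_power_limit_watts(watts):
            with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
                f.write(str(int(watts * 1e6)))      # requires root

        if __name__ == "__main__":
            print("current cap:", read_power_limit_watts(), "W")
            # Example: cap the package at 80 W before running a benchmark.
            # set_power_limit_watts(80)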

  17. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  18. Computer modeling and simulators as part of university training for NPP operating personnel

    NASA Astrophysics Data System (ADS)

    Volman, M.

    2017-01-01

    This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling in Mathcad is used for numerical experiments on the kinetics of nuclear reactors. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to simulate neutron-physical reactor measurements and the start-up and shutdown process.
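
    A typical numerical experiment of this kind integrates the point reactor kinetics equations. A minimal sketch with one effective delayed-neutron group follows; the parameter values are illustrative and are not taken from the course materials.

        # Point reactor kinetics with one effective delayed-neutron group, integrated
        # with a simple explicit Euler scheme; parameter values are illustrative only.
        beta = 0.0065        # delayed neutron fraction
        lam = 0.08           # decay constant of the delayed group (1/s)
        Lambda = 1e-4        # prompt neutron generation time (s)
        rho = 0.001          # step reactivity insertion (about 0.15 $)

        n, C = 1.0, beta / (lam * Lambda)   # precursor concentration in equilibrium at n = 1
        dt, t_end = 1e-5, 1.0
        t = 0.0
        while t < t_end:
            dn = ((rho - beta) / Lambda) * n + lam * C
            dC = (beta / Lambda) * n - lam * C
            n += dn * dt
            C += dC * dt
            t += dt
        print(f"relative power after {t_end} s: {n:.3f}")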

  19. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
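
    The central mapping step described here, partitioning application work atoms among workers, can be sketched generically; the thread pool below stands in for threads or thread blocks, and all names are illustrative rather than part of the manyscale software.

        # Illustrative sketch of mapping a list of work atoms onto a pool of workers,
        # analogous to assigning atoms to threads or thread blocks.
        from concurrent.futures import ThreadPoolExecutor

        def process(atom):
            # Stand-in for the per-atom kernel (e.g., one tile of an image).
            return sum(atom)

        work_atoms = [list(range(i, i + 8)) for i in range(0, 64, 8)]
        n_workers = 4

        # Static partition: worker w gets atoms w, w + n_workers, w + 2*n_workers, ...
        def worker_chunk(w):
            return [process(a) for a in work_atoms[w::n_workers]]

        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(worker_chunk, range(n_workers)))
        print(results)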

  20. Can Designing Self-Representations through Creative Computing Promote an Incremental View of Intelligence and Enhance Creativity among At-Risk Youth?

    ERIC Educational Resources Information Center

    Blau, Ina; Benolol, Nurit

    2016-01-01

    Creative computing is one of the rapidly growing educational trends around the world. Previous studies have shown that creative computing can empower disadvantaged children and youth. At-risk youth tend to hold a negative view of self and perceive their abilities as inferior compared to "normative" pupils. The Implicit Theories of…

  1. Computer Games Are Fun? On Professional Games and Players' Motivations

    ERIC Educational Resources Information Center

    Eglesz, Denes; Fekete, Istvan; Kiss, Orhidea Edith; Izso, Lajos

    2005-01-01

    As computer games are becoming more widespread, there is a tendency for young people to spend a growing amount of time playing games. The first part of this paper will deal with various types of computer games and their characteristic features. In the second part we show the results of our recent surveys. We examined the motivations of young…

  2. Towards a Versatile Tele-Education Platform for Computer Science Educators Based on the Greek School Network

    ERIC Educational Resources Information Center

    Paraskevas, Michael; Zarouchas, Thomas; Angelopoulos, Panagiotis; Perikos, Isidoros

    2013-01-01

    Nowadays, the growing need for highly qualified computer science educators in modern educational environments is commonplace. This study examines the potential use of the Greek School Network (GSN) to provide a robust and comprehensive e-training course for computer science educators in order to efficiently exploit advanced IT services and establish a…

  3. An Introduced Hybrid Graphene/Polyaniline Composites for Improvement of Supercapacitor

    NASA Astrophysics Data System (ADS)

    Tayel, Mazhar B.; Soliman, Moataz M.; Ebrahim, Shaker; Harb, Mohamed E.

    2016-01-01

    Supercapacitors represent an attractive alternative for portable electronics and automotive applications due to their high capacitance, specific power and extended life. In fact, the growing demand of portable systems and hybrid electric vehicles, memory protection in complementary metal-oxide-semiconductor (CMOS) logic circuits, videocassette recorders (VCRs), compact disc (CD) players, personal computers (PCs), uninterruptible power supplies (UPS) in security alarm systems, remote sensing, smoke detectors, etc. requires high power in short-term pulses. Therefore, in the last 20 years, supercapacitors have been required for the development of large and small devices driven by electrical power. In this paper, graphene oxide (GO) was synthesized by the improved Hummers method. Two polyaniline (PANI)/graphene oxide nanocomposite electrode materials were prepared from aniline, GO and ammonium persulfate (APS) by in situ chemical polymerization with mass ratios (mGO:mAniline) of 10:90 and 30:70 in an ice bath. The crystal structure and the surface topography of all materials were characterized by means of x-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy and scanning electron microscopy (SEM). The electrochemical properties of the composites were evaluated by cyclic voltammetry (CV), charge-discharge measurements and electrical impedance spectroscopy (EIS). The results show that the composites have similar and enhanced cyclic voltammetry performance compared with the pure PANI-based electrode material. The graphene/PANI composite synthesized with the mass ratio (mANI:mGO) of 90:10 possessed good capacitive behavior, with a specific capacitance as high as 1509.35 F/g at a scan rate of 1 mV/s in a scanning potential window from -0.8 V to 0.8 V.

  4. Economic crisis helps to "demarginalize" women.

    PubMed

    Forje, C L

    1998-05-01

    This article discusses processes that demarginalize women in developing countries. The case study pertained to women in the Santa coffee growing area of North West Province, Cameroon. Coffee is the main source of income for families. Women obtain land for growing subsistence crops with the permission of their husbands and with pleading. The sharp fall in coffee prices left families in economic difficulties. It forced families to reduce coffee production or abandon coffee production entirely. Women found an alternative in growing vegetables for retail sale. The women formed associations that provided instruction on how to farm, sell produce, and obtain credit. Men observed the increase in income from the women's sale of produce. Women included the men in discussions about their progress and difficulties. Men were thus encouraged to sell their land to women. Women gained power by becoming the sole source of family income. The women's groups helped women improve land use and the use of income. This experience proved that crises can have positive outcomes. Men's power was based on economic control over resources rather than culture or machismo. Women's cooperation with men in nation building resulted in economic independence. Marginalization of women was based on money. Women proved that participation, rather than power, was the appropriate means to social change and more equitable relations. The obstacles to women's power included cultural expectations about their roles and choices, sex discrimination, and lack of access to leadership networks.

  5. Recent Trends in Spintronics-Based Nanomagnetic Logic

    NASA Astrophysics Data System (ADS)

    Das, Jayita; Alam, Syed M.; Bhanja, Sanjukta

    2014-09-01

    With growing concerns about standby power in sub-100-nm CMOS technologies, alternative computing techniques and memory technologies are being explored. Spin transfer torque magnetoresistive RAM (STT-MRAM) is one such nonvolatile memory, relying on magnetic tunnel junctions (MTJs) to store information; it uses spin transfer torque to write information and magnetoresistance to read it. In 2012, Everspin Technologies, Inc. commercialized the first 64Mbit Spin Torque MRAM. On the computing end, nanomagnetic logic (NML) is a promising technique with zero leakage and high data retention. In 2000, Cowburn and Welland first demonstrated its potential for logic and information propagation through magnetostatic interaction in a chain of single-domain circular nanomagnetic dots of Supermalloy (Ni80Fe14Mo5X1, where X is other metals). In 2006, Imre et al. demonstrated wires and majority gates, followed by a demonstration of coplanar cross-wire systems by Pulecio et al. in 2010. Since 2004, researchers have also investigated the potential of MTJs in logic. More recently, with dipolar coupling between MTJs demonstrated in 2012, logic-in-memory architectures with STT-MRAM have been investigated. The architecture borrows its computing concept from NML and its read and write style from MRAM, and it can switch its operation between logic and memory modes with the clock as classifier. Further, through logic partitioning between the MTJ and CMOS planes, a significant performance boost has been observed in basic computing blocks within the architecture. In this work, we have explored the developments in NML and in MTJs, and more recent developments in the hybrid MTJ/CMOS logic-in-memory architecture and its unique logic partitioning capability.

  6. An Automated Method to Compute Orbital Re-entry Trajectories with Heating Constraints

    NASA Technical Reports Server (NTRS)

    Zimmerman, Curtis; Dukeman, Greg; Hanson, John; Fogle, Frank R. (Technical Monitor)

    2002-01-01

    Determining how to properly manipulate the controls of a re-entering re-usable launch vehicle (RLV) so that it is able to safely return to Earth and land involves the solution of a two-point boundary value problem (TPBVP). This problem, which can be quite difficult, is traditionally solved on the ground prior to flight. If necessary, a nearly unlimited amount of time is available to find the 'best' solution using a variety of trajectory design and optimization tools. The role of entry guidance during flight is to follow the pre-determined reference solution while correcting for any errors encountered along the way. This guidance method is both highly reliable and very efficient in terms of onboard computer resources. There is a growing interest in a style of entry guidance that places the responsibility of solving the TPBVP in the actual entry guidance flight software. Here there is very limited computer time. The powerful, but finicky, mathematical tools used by trajectory designers on the ground cannot in general be converted to do the job. Non-convergence or slow convergence can result in disaster. The challenges of designing such an algorithm are numerous and difficult. Yet the payoff (in the form of decreased operational costs and increased safety) can be substantial. This paper presents an algorithm that incorporates features of both types of guidance strategies. It takes an initial RLV orbital re-entry state and finds a trajectory that will safely transport the vehicle to Earth. During actual flight, the computed trajectory is used as the reference to be flown by a more traditional guidance method.

  7. Power system requirements and definition for lunar and Mars outposts

    NASA Technical Reports Server (NTRS)

    Petri, D. A.; Cataldo, R. L.; Bozek, J. M.

    1990-01-01

    Candidate power systems being considered for outpost facilities (stationary power systems) and vehicles (mobile systems) are discussed, including solar, chemical, isotopic, and reactor. The current power strategy was an initial outpost power system composed of photovoltaic arrays for daytime energy needs and regenerative fuel cells for power during the long lunar night. As day and night power demands grow, the outpost transitions to nuclear-based power generation, using thermoelectric conversion initially and evolving to a dynamic conversion system. With this concept as a guideline, a set of requirements has been established, and a reference definition of candidate power systems meeting these requirements has been identified.

  8. Ab Initio Assessment of the Thermoelectric Performance of Ruthenium-Doped Gadolinium Orthotantalate

    NASA Technical Reports Server (NTRS)

    Goldsby, Jon

    2016-01-01

    Solid-state energy harvesting using waste heat available in the gas turbine engine offers potential for power generation to meet the growing power needs of aircraft. Advances in thermoelectric materials offer new opportunities, with energy conversion devices incorporated into a weight-optimized, integrated turbine engine structure.

  9. Software Support for Transiently Powered Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Woude, Joel Matthew

    With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.

  10. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
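
    The Performance Analysis function described above amounts to an energy-balance simulation of a solar array/battery supply under a given load and environment. The sketch below illustrates that idea only; the function name, insolation model, and all parameter values are assumptions for illustration and are not taken from the DSPA program.

    ```python
    # Illustrative sketch of a solar array/battery energy-balance simulation,
    # in the spirit of the DSPA Performance Analysis step. All parameter values,
    # names, and the insolation model are assumptions for illustration only.
    import math

    def simulate_navaid_power(array_watts_peak, battery_wh, load_watts, days=30):
        """Hourly energy balance for a solar array/battery navigation-aid supply."""
        soc = battery_wh                      # start with a full battery (Wh)
        min_soc = soc
        for hour in range(days * 24):
            t = hour % 24
            # Crude insolation model: sinusoidal daylight between 06:00 and 18:00.
            if 6 <= t <= 18:
                array_power = array_watts_peak * math.sin(math.pi * (t - 6) / 12)
            else:
                array_power = 0.0
            soc += (array_power - load_watts) * 1.0    # 1-hour time step (Wh)
            soc = min(soc, battery_wh)                 # battery cannot overcharge
            min_soc = min(min_soc, soc)
        return min_soc

    # A design is viable (in this toy model) if the battery never fully depletes.
    if __name__ == "__main__":
        print("Minimum state of charge:", simulate_navaid_power(60.0, 1200.0, 10.0), "Wh")
    ```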

  11. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures, and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low-inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system composed of a synchronous generator, a three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and the machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
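
    The assessment step described above (linearize the coupled model, then inspect the eigenvalues of the resulting state matrix) can be illustrated with a minimal sketch; the 2x2 matrix used here is a stand-in placeholder, not the paper's machine-inverter model.

    ```python
    # Minimal sketch of small-signal stability assessment: linearize a system
    # about an operating point, then check the eigenvalues of the state matrix.
    # The matrix below is a placeholder two-state example, not the
    # machine-inverter model from the paper.
    import numpy as np

    A = np.array([[-0.5,  2.0],
                  [-3.0, -1.0]])   # assumed Jacobian at an operating point

    eigenvalues = np.linalg.eigvals(A)
    print("Eigenvalues:", eigenvalues)
    # Small-signal stable if every eigenvalue has a negative real part.
    print("Small-signal stable:", bool(np.all(eigenvalues.real < 0)))
    ```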

  12. Syringeless power injector versus dual-syringe power injector: economic evaluation of user performance, the impact on contrast enhanced computed tomography (CECT) workflow exams, and hospital costs.

    PubMed

    Colombo, Giorgio L; Andreis, Ivo A Bergamo; Di Matteo, Sergio; Bruno, Giacomo M; Mondellini, Claudio

    2013-01-01

    The utilization of diagnostic imaging has substantially increased over the past decade in Europe and North America and continues to grow worldwide. The purpose of this study was to develop an economic evaluation of a syringeless power injector (PI) versus a dual-syringe PI for contrast enhanced computed tomography (CECT) in a hospital setting. Patients (n=2379) were enrolled at the Legnano Hospital between November 2012 and January 2013. They had been referred to the hospital for a CECT analysis and were randomized into two groups. The first group was examined with a 256-MDCT (MultiDetector Computed Tomography) scanner using a syringeless power injector, while the other group was examined with a 64-MDCT scanner using a dual-syringe PI. Data on the operators' time required in the patient analysis steps as well as on the quantity of consumable materials used were collected. The radiologic technologists' satisfaction with the use of the PIs was rated on a 10-point scale. A budget impact analysis and sensitivity analysis were performed under the base-case scenario. A total of 1,040 patients were examined using the syringeless system, and 1,339 with the dual-syringe system; the CECT examination quality was comparable for both PI systems. Equipment preparation time and releasing time per examination for syringeless PIs versus dual-syringe PIs were 100±30 versus 180±30 seconds and 90±30 versus 140±20 seconds, respectively. On average, 10±3 mL of contrast media (CM) wastage per examination was observed with the dual-syringe PI and 0±1 mL with the syringeless PI. Technologists had higher satisfaction with the syringeless PI than with the dual-syringe system (8.8 versus 8.0). The syringeless PI allows a saving of about €6.18 per patient, both due to the lower cost of the devices and to the better performance of the syringeless system. The univariate sensitivity analysis carried out on the base-case results within the standard deviation range confirmed the saving generated by using the syringeless device, with saving values between €5.40 and €6.20 per patient. The syringeless PI was found to be more user-friendly and efficient, minimizing contrast wastage and providing similar contrast enhancement quality compared to the dual-syringe injector, with comparable CECT examination quality.

  13. Low-Carbon Computing

    ERIC Educational Resources Information Center

    Hignite, Karla

    2009-01-01

    Green information technology (IT) is grabbing more mainstream headlines--and for good reason. Computing, data processing, and electronic file storage collectively account for a significant and growing share of energy consumption in the business world and on higher education campuses. With greater scrutiny of all activities that contribute to an…

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
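
    The data parallelism the abstract refers to (the same computation applied independently to every pixel) can be shown with a small vectorized example; the synthetic frame stack and the peak-contrast metric below are assumptions for illustration, not the application's actual analysis.

    ```python
    # Sketch of a data-parallel, per-pixel computation of the kind that maps well
    # to GPUs: every output element depends only on the corresponding input pixels.
    # The "frames" array and the peak-contrast metric are illustrative assumptions.
    import numpy as np

    frames = np.random.rand(100, 512, 512)   # stand-in thermographic sequence (t, y, x)

    # Element-wise reduction over time: identical, independent work per pixel,
    # so it parallelizes trivially (here via NumPy vectorization; on a GPU one
    # thread per pixel would perform the same arithmetic).
    peak_contrast = frames.max(axis=0) - frames[0]
    print(peak_contrast.shape)               # (512, 512)
    ```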

  15. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention. One such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system that manages the operation of computing-network nodes, with networked PCs serving as the computing nodes. The proposed multi-agent control system allows the processing power of the computers on any existing network to be organized in a short time into a distributed computing system for solving large tasks. Agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the system according to the computing power of the computers on the network. The number of computers can be increased by connecting new computers to the system, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
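
    One element of the scheme described above, distributing the computational load according to each node's computing power, is sketched below; the node names and power figures are illustrative assumptions rather than details from the paper.

    ```python
    # Illustrative sketch of load distribution across network nodes in proportion
    # to each node's computing power. Node names and power figures are assumed.

    def distribute_load(num_tasks, node_power):
        """Split num_tasks among nodes proportionally to their relative power."""
        total = sum(node_power.values())
        shares = {n: int(num_tasks * p / total) for n, p in node_power.items()}
        # Hand out any remainder (lost to rounding) to the most powerful nodes first.
        remainder = num_tasks - sum(shares.values())
        for n in sorted(node_power, key=node_power.get, reverse=True)[:remainder]:
            shares[n] += 1
        return shares

    print(distribute_load(100, {"pc-a": 3.2, "pc-b": 2.1, "pc-c": 1.0}))
    ```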

  16. Parallelized Seeded Region Growing Using CUDA

    PubMed Central

    Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests. PMID:25309619
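
    For reference, a minimal sequential sketch of seeded region growing follows, since this is the baseline behaviour the paper parallelizes with CUDA; the intensity threshold, 4-connectivity, and running-mean criterion are assumptions of this sketch rather than details taken from the paper.

    ```python
    # Sequential sketch of seeded region growing (SRG) on a 2-D image: starting
    # from seed pixels, repeatedly absorb neighbouring pixels whose intensity is
    # close to the running region mean.
    from collections import deque
    import numpy as np

    def seeded_region_growing(image, seeds, threshold=10.0):
        grown = np.zeros(image.shape, dtype=bool)
        queue = deque(seeds)
        region_sum, region_count = 0.0, 0
        for y, x in seeds:
            grown[y, x] = True
            region_sum += image[y, x]
            region_count += 1
        while queue:
            y, x = queue.popleft()
            mean = region_sum / region_count
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # 4-neighbourhood
                ny, nx = y + dy, x + dx
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and not grown[ny, nx]
                        and abs(image[ny, nx] - mean) <= threshold):
                    grown[ny, nx] = True
                    region_sum += image[ny, nx]
                    region_count += 1
                    queue.append((ny, nx))
        return grown

    img = np.random.rand(64, 64) * 100
    mask = seeded_region_growing(img, seeds=[(32, 32)])
    print(mask.sum(), "pixels grown from the seed")
    ```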

  17. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  18. Hard Copy Market Overview

    NASA Astrophysics Data System (ADS)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to product, software, color monitors, and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer-generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic, and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as Postscript, Interpress, and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer-generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected during 1987. The color hard copy market continues to be in a state of constant change, typical of any immature market. However, much of the change is positive. During 1985, the color hard copy market generated $1.2 billion. By 1990, total market revenue is expected to exceed $5.5 billion. The business graphics CHC application area is expected to grow at a compound annual growth rate greater than 40 percent to 1990.

  19. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 2: SYSTID user's guide

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.

  20. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.

  1. Small Universal Bacteria and Plasmid Computing Systems.

    PubMed

    Wang, Xun; Zheng, Pan; Ma, Tongmao; Song, Tao

    2018-05-29

    Bacterial computing is a known candidate in natural computing, the aim being to construct "bacterial computers" for solving complex problems. In this paper, a new kind of bacterial computing system, named the bacteria and plasmid computing system (BP system), is proposed. We investigate the computational power of BP systems with finite numbers of bacteria and plasmids. Specifically, it is obtained in a constructive way that a BP system with 2 bacteria and 34 plasmids is Turing universal. The results provide a theoretical cornerstone to construct powerful bacterial computers and demonstrate a concept of paradigms using a "reasonable" number of bacteria and plasmids for such devices.

  2. Energy Awareness and Scheduling in Mobile Devices and High End Computing

    ERIC Educational Resources Information Center

    Pawaskar, Sachin S.

    2013-01-01

    In the context of the big picture as energy demands rise due to growing economies and growing populations, there will be greater emphasis on sustainable supply, conservation, and efficient usage of this vital resource. Even at a smaller level, the need for minimizing energy consumption continues to be compelling in embedded, mobile, and server…

  3. Maximizing Computational Capability with Minimal Power

    DTIC Science & Technology

    2009-03-01

    [Extracted fragments from this chip-scale energy and power document describe an optical-bench setup (mounting posts, imager chip, polarizers, XYZ translator, optical slide, and an LCD interfaced with the computer) for a vector-matrix-multiply (VMM) computational pixel, and quote the CMOS dynamic switching power estimate P = ½·C·Vdd²·f per chip, noting that this figure excludes off-chip communication (e.g., a chip-to-chip load of at least 10 pF) and memory access.]
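
    The one formula recoverable from the extract, the CMOS dynamic switching power estimate, can be applied directly; the capacitance, supply voltage, and clock frequency below are illustrative values, not figures from the report.

    ```python
    # Worked example of the dynamic switching power estimate quoted above,
    # P = 1/2 * C * Vdd^2 * f for CMOS. All numeric values are assumed examples.
    def cmos_dynamic_power(c_farads, vdd_volts, f_hz):
        return 0.5 * c_farads * vdd_volts ** 2 * f_hz

    # e.g. 10 pF of switched capacitance at 1.2 V and 100 MHz:
    print(cmos_dynamic_power(10e-12, 1.2, 100e6), "W")   # 7.2e-4 W = 0.72 mW
    ```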

  4. 78 FR 47805 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ... Documents Access and Management System (ADAMS): You may access publicly available documents online in the... Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Revision...

  5. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  6. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  7. 76 FR 40943 - Notice of Issuance of Regulatory Guide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-12

    ..., Revision 3, ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants.'' FOR FURTHER..., ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants,'' was issued with a temporary... Fuel Reprocessing Plants,'' to 10 CFR part 50 with regard to the use of computers in safety systems of...

  8. Unity Power Factor Operated PFC Converter Based Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist

    2017-11-01

    Power supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard power factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high crest factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it is capable of improving the input power quality in addition to regulating the dc voltage at its output. This output feeds the isolated stage, which yields completely isolated and stiffly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions, and the results show improved performance, with nearly unity PF and low input current harmonics. A prototype of this PS was developed in a laboratory environment, and the recorded test results corroborate the power quality improvement observed in simulation under various operating conditions.
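
    The power-quality metrics discussed in the abstract (power factor, harmonic distortion, and crest factor) are related by simple expressions; the sketch below evaluates them for an assumed set of harmonic amplitudes, which are illustrative values rather than measurements from the paper.

    ```python
    # Sketch of the power-quality metrics discussed above: true power factor as the
    # product of displacement factor and distortion factor, and the crest factor of
    # the input current. All numeric inputs are illustrative assumptions.
    import math

    def power_quality(fundamental_rms, harmonic_rms_list, displacement_angle_rad, peak_current):
        total_rms = math.sqrt(fundamental_rms**2 + sum(h**2 for h in harmonic_rms_list))
        thd = math.sqrt(sum(h**2 for h in harmonic_rms_list)) / fundamental_rms
        distortion_factor = 1.0 / math.sqrt(1.0 + thd**2)       # = I1_rms / I_rms
        true_pf = math.cos(displacement_angle_rad) * distortion_factor
        crest_factor = peak_current / total_rms
        return thd, true_pf, crest_factor

    # A heavily distorted, peaky input current (large low-order harmonics):
    print(power_quality(1.0, [0.8, 0.6, 0.3], 0.2, 2.8))
    ```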

  9. Saudi Arabias Implementation of Soft Power Policy to Confront Irans Obvious Threats

    DTIC Science & Technology

    2015-12-01

    [Extracted fragments from this thesis by Abdullah Khuliyf A. Alanazi (Colonel, Saudi Arabia National Guard; B.S., King Khalid Military College, 1993) address Iran's inevitably growing power, particularly in the Persian Gulf and throughout the Middle East, a situation that advocates utilizing soft power as a new approach. Cited sources include www.educause.edu/ir/library/pdf/ffpiu043.pdf and Giulio Gallarotti and Isam Yahia Al-Filali, "Saudi Arabia's Soft Power," International Studies 49.]

  10. Designing and Testing Energy Harvesters Suitable for Renewable Power Sources

    NASA Astrophysics Data System (ADS)

    Synkiewicz, B.; Guzdek, P.; Piekarski, J.; Zaraska, K.

    2016-01-01

    Energy harvesters convert waste power (heat, light, and vibration) directly to electric power. Fast progress in their technology, design, and areas of application (e.g., the "Internet of Things") has been observed recently. Their effectiveness is steadily growing, which makes their application to powering sensor networks with wireless data transfer reasonable. The main advantage is independence from wired power sources, which is especially important for monitoring environmental parameters. In this paper we describe the design and realization of a gas sensor monitoring CO level (powered by a TEG) and two autonomous power supply modules, designed and constructed at ITE, powered by modern photovoltaic cells.

  11. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  12. The Influence of Computer-Mediated Communication Systems on Community

    ERIC Educational Resources Information Center

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  13. Fluid particles only separate exponentially in the dissipation range of turbulence after extremely long times

    NASA Astrophysics Data System (ADS)

    Dhariwal, Rohit; Bragg, Andrew D.

    2018-03-01

    In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη, where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.

  14. Teachers' Experiences of Using Eye Gaze-Controlled Computers for Pupils with Severe Motor Impairments and without Speech

    ERIC Educational Resources Information Center

    Rytterström, Patrik; Borgestig, Maria; Hemmingsson, Helena

    2016-01-01

    The purpose of this study is to explore teachers' experiences of using eye gaze-controlled computers with pupils with severe disabilities. Technology to control a computer with eye gaze is a fast growing field and has promising implications for people with severe disabilities. This is a new assistive technology and a new learning situation for…

  15. Computer science security research and human subjects: emerging considerations for research ethics boards.

    PubMed

    Buchanan, Elizabeth; Aycock, John; Dexter, Scott; Dittrich, David; Hvizdak, Erin

    2011-06-01

    This paper explores the growing concerns with computer science research, and in particular, computer security research and its relationship with the committees that review human subjects research. It offers cases that review boards are likely to confront, and provides a context for appropriate consideration of such research, as issues of bots, clouds, and worms enter the discourse of human subjects review.

  16. Energy Efficiency Maximization of Practical Wireless Communication Systems

    NASA Astrophysics Data System (ADS)

    Eraslan, Eren

    Energy consumption of modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed in order to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry; however, they are capable of changing the transmission parameters, such as the number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on link adaptation for the energy efficiency maximization problem, which is defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performance of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real time. It turns out that the brute-force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between energy efficiency and transmit power, and prove that the energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and takes this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use a higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space to a much smaller set, hence reducing the computational complexity by orders of magnitude without sacrificing performance. The result of this work is a highly practical link adaptation protocol for maximizing the energy efficiency of modern wireless communication systems. Simulation results show orders-of-magnitude gains in the energy efficiency of the link. We also implemented the link adaptation protocol on real-time MIMO-OFDM radios and report on the experimental results. To the best of our knowledge, this is the first reported testbed that is capable of performing energy-efficient fast link adaptation using PHY layer information.
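
    The quasiconcavity result mentioned above is what makes a low-complexity search over transmit power possible: a single-peaked function can be maximized without testing every power level. The sketch below applies a ternary search to a toy efficiency model; the throughput expression, circuit-power term, and channel gain are illustrative assumptions, not the paper's algorithm or link model.

    ```python
    # Sketch of maximizing a single-peaked (quasiconcave) energy-efficiency curve
    # over transmit power via ternary search. The efficiency model is a toy
    # log-throughput over total power cost; all constants are assumed.
    import math

    def energy_efficiency(p_tx, p_circuit=1.0, gain=50.0):
        throughput = math.log2(1.0 + gain * p_tx)     # bits/s/Hz (toy model)
        return throughput / (p_tx + p_circuit)        # efficiency (scaled bits/J)

    def best_power(lo=1e-3, hi=10.0, iters=60):
        for _ in range(iters):                         # ternary search on a unimodal function
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if energy_efficiency(m1) < energy_efficiency(m2):
                lo = m1
            else:
                hi = m2
        return (lo + hi) / 2

    p_star = best_power()
    print("Near-optimal transmit power:", p_star, "efficiency:", energy_efficiency(p_star))
    ```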

  17. Does Stevens's Power Law for Brightness Extend to Perceptual Brightness Averaging?

    ERIC Educational Resources Information Center

    Bauer, Ben

    2009-01-01

    Stevens's power law (Ψ ∝ Φ^β) captures the relationship between physical (Φ) and perceived (Ψ) magnitude for many stimulus continua (e.g., luminance and brightness, weight and heaviness, area and size). The exponent (β) indicates whether perceptual magnitude grows more slowly than physical magnitude (β less…

  18. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  19. Geometric Heat Engines Featuring Power that Grows with Efficiency.

    PubMed

    Raz, O; Subaşı, Y; Pugatch, R

    2016-04-22

    Thermodynamics places a limit on the efficiency of heat engines, but not on their output power or on how the power and efficiency change with the engine's cycle time. In this Letter, we develop a geometrical description of the power and efficiency as a function of the cycle time, applicable to an important class of heat engine models. This geometrical description is used to design engine protocols that attain both the maximal power and maximal efficiency at the fast driving limit. Furthermore, using this method, we also prove that no protocol can exactly attain the Carnot efficiency at nonzero power.

  20. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to " exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current new and future work in Computer Vision, Speech Recognition, and Artificial Intelligence are also outlined.

  1. Forecasting Electric Power Generation of Photovoltaic Power System for Energy Network

    NASA Astrophysics Data System (ADS)

    Kudo, Mitsuru; Takeuchi, Akira; Nozaki, Yousuke; Endo, Hisahito; Sumita, Jiro

    Recently, there has been an increase in concern about the global environment. Interest is growing in developing an energy network in which new energy systems such as photovoltaics and fuel cells generate power locally and electric power and heat are controlled through a communications network. We developed a power generation forecast method for photovoltaic power systems in an energy network. The method makes use of weather information and regression analysis. We forecast the power output of the photovoltaic power system installed at Expo 2005, Aichi, Japan. Comparing measurements with predicted values, the average prediction error per day was about 26% of the measured power.
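
    The forecast method described above, regression of photovoltaic output on weather information, can be sketched as an ordinary least-squares fit; the synthetic records and the choice of irradiance and temperature as predictors are illustrative assumptions, not the Expo 2005 data or the paper's exact model.

    ```python
    # Sketch of a regression-based PV power forecast: fit output as a linear
    # function of forecast weather variables. All data values are synthetic.
    import numpy as np

    # Historical records: [irradiance (kW/m^2), ambient temperature (deg C)] -> output (kW)
    X = np.array([[0.2, 10], [0.5, 15], [0.8, 20], [1.0, 25], [0.6, 18], [0.3, 12]])
    y = np.array([ 1.8,       4.6,       7.1,       8.8,       5.3,       2.7])

    # Ordinary least squares with an intercept term.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    tomorrow = np.array([0.7, 19, 1.0])   # forecast irradiance, temperature, intercept term
    print("Predicted output (kW):", tomorrow @ coeffs)
    ```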

  2. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Throneburg, E. B.; Jones, J. M.

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  3. Segmenting root systems in xray computed tomography images using level sets

    USDA-ARS?s Scientific Manuscript database

    The segmentation of plant roots from soil and other growing mediums in X-ray computed tomography images is needed to effectively study the shapes of roots without excavation. However, segmentation is a challenging problem in this context because the root and non-root regions share similar features. ...

  4. The Reality, Direction, and Future of Computerized Publications

    ERIC Educational Resources Information Center

    Levenstein, Nicholas

    2012-01-01

    Sharing information in digital form by using a computer is a growing phenomenon. Many universities are making their applications available on computer. More than one hundred and thirty-six universities have developed computerized applications on their own or through a commercial vendor. Universities developed computerized applications in order to…

  5. Challenges in Transcribing Multimodal Data: A Case Study

    ERIC Educational Resources Information Center

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  6. Case Study of a Computer Based Examination System

    ERIC Educational Resources Information Center

    Fluck, Andrew; Pullen, Darren; Harper, Colleen

    2009-01-01

    Electronic supported assessment or e-Assessment is a field of growing importance, but it has yet to make a significant impact in the Australian higher education sector (Byrnes & Ellis, 2006). Current computer based assessment models focus on the assessment of knowledge rather than deeper understandings, using multiple choice type questions,…

  7. Exploring Cloud Computing for Distance Learning

    ERIC Educational Resources Information Center

    He, Wu; Cernusca, Dan; Abdous, M'hammed

    2011-01-01

    The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…

  8. Fighting Technology for Toddlers

    ERIC Educational Resources Information Center

    Miller, Edward

    2005-01-01

    The presence of computers in early childhood classrooms is growing, despite serious doubts about whether young children really need them or benefit from using them. A recent study of child care centers in Texas found that preschool children were using computers in more than 75% of these institutions. According to most child development experts,…

  9. Electronic Advocacy and Social Welfare Policy Education

    ERIC Educational Resources Information Center

    Moon, Sung Seek; DeWeaver, Kevin L.

    2005-01-01

    The rapid increase in the number of low-cost computers, the proliferation of user-friendly software, and the development of electronic networks have created the "informatics era." The Internet is a rapidly growing communication resource that is becoming mainstream in the American society. Computer-based electronic political advocacy by social…

  10. The Classroom, Board Room, Chat Room, and Court Room: School Computers at the Crossroads.

    ERIC Educational Resources Information Center

    Stewart, Michael

    2000-01-01

    In schools' efforts to maximize technology's benefits, ethical considerations have often taken a back seat. Computer misuse is growing exponentially and assuming many forms: unauthorized data access, hacking, piracy, information theft, fraud, virus creation, harassment, defamation, and discrimination. Integrated-learning activities will help…

  11. The Administrative Use of Microcomputers. Technical Report.

    ERIC Educational Resources Information Center

    Alabama Univ., University. Coll. of Education.

    Citing the growing interest in using microcomputers as an aid in educational administration, this report discusses factors that must be considered when purchasing a computer and describes an actual case of computer implementation for administrative purposes. The first steps in the purchasing process are to assess and to prioritize the…

  12. The New Technology in Political Education in West Germany.

    ERIC Educational Resources Information Center

    George, Siegfried

    Debate in West Germany among technicians, economists, politicians, and educators about technological advancement and the use of computers focuses on the need to be informed about the consequences of the technological revolution. Some concerns are that computer use will lead to social isolation, a growing bureaucracy and authoritarian power…

  13. CALL Essentials: Principles and Practice in CALL Classrooms

    ERIC Educational Resources Information Center

    Egbert, Joy

    2005-01-01

    Computers and the Internet offer innovative teachers exciting ways to enhance their pedagogy and capture their students' attention. These technologies have created a growing field of inquiry, computer-assisted language learning (CALL). As new technologies have emerged, teaching professionals have adapted them to support teachers and learners in…

  14. A Computer-Assisted Approach for Conducting Information Technology Applied Instructions

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Hwang, Gwo-Jen; Tsai, Pei Jin; Yang, Tzu-Chi

    2009-01-01

    The growing popularity of computer and network technologies has attracted researchers to investigate the strategies and the effects of information technology applied instructions. Previous research has not only demonstrated the benefits of applying information technologies to the learning process, but has also revealed the difficulty of applying…

  15. Music Learning in Your School Computer Lab.

    ERIC Educational Resources Information Center

    Reese, Sam

    1998-01-01

    States that a growing number of schools are installing general computer labs equipped to use notation, accompaniment, and sequencing software independent of MIDI keyboards. Discusses (1) how to configure the software without MIDI keyboards or external sound modules, (2) using the actual MIDI software, (3) inexpensive enhancements, and (4) the…

  16. Computer program for updating timber resource statistics by county, with tables for Mississippi

    Treesearch

    Roy C. Beltz; Joe F. Christopher

    1970-01-01

    A computer program is available for updating Forest Survey estimates of timber growth, cut, and inventory volume by species group, for sawtimber and growing stock. Changes in rate of product removal are estimated from changes in State severance tax data. Updated tables are given for Mississippi.

  17. Understanding Initial Undergraduate Expectations and Identity in Computing Studies

    ERIC Educational Resources Information Center

    Kinnunen, Päivi; Butler, Matthew; Morgan, Michael; Nylen, Aletta; Peters, Anne-Kathrin; Sinclair, Jane; Kalvala, Sara; Pesonen, Erkki

    2018-01-01

    There is growing appreciation of the importance of understanding the student perspective in Higher Education (HE) at both institutional and international levels. This is particularly important in Science, Technology, Engineering and Mathematics subjects such as Computer Science (CS) and Engineering in which industry needs are high but so are…

  18. Computer-Assisted Second Language Vocabulary Instruction: A Meta-Analysis

    ERIC Educational Resources Information Center

    Chiu, Yi-Hui

    2013-01-01

    There is growing attention to incorporating computer-mediated instruction for language learning and teaching. Specifically, vocabulary is arguably the foundation of mastering a language, as the mastery of vocabulary is the fundamental step of learning a language. Second language (L2) vocabulary is important in the development of cognitive systems…

  19. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and specialpurpose software that collects and stores electrical power data by various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.

  20. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
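
    The savings estimate the lesson asks students to make is a straightforward calculation; a sketch is given below, in which the computer count, wattages, idle hours, school-year length, and electricity rate are all assumed example values.

    ```python
    # Sketch of the classroom savings estimate: energy and cost avoided by putting
    # computers into a low-power sleep state during otherwise idle hours.
    # Every numeric input is an assumed example value.
    def annual_savings(num_pcs, active_w, sleep_w, idle_hours_per_day,
                       days_per_year=180, price_per_kwh=0.12):
        saved_w_per_pc = active_w - sleep_w            # power avoided while idle
        kwh_saved = num_pcs * saved_w_per_pc * idle_hours_per_day * days_per_year / 1000.0
        return kwh_saved, kwh_saved * price_per_kwh

    kwh, dollars = annual_savings(num_pcs=30, active_w=120, sleep_w=5, idle_hours_per_day=6)
    print(f"{kwh:.0f} kWh and ${dollars:.2f} saved per school year")
    ```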

  1. Computing the Power-Density Spectrum for an Engineering Model

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1982-01-01

    Computer program for calculating power-density spectrum (PDS) from data base generated by Advanced Continuous Simulation Language (ACSL) uses algorithm that employs fast Fourier transform (FFT) to calculate PDS of variable. Accomplished by first estimating autocovariance function of variable and then taking FFT of smoothed autocovariance function to obtain PDS. Fast-Fourier-transform technique conserves computer resources.
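
    The procedure described above (estimate the autocovariance, smooth it, then take its FFT) is essentially a Blackman-Tukey spectral estimate; a minimal sketch follows, in which the test signal, lag window, and maximum lag are illustrative choices rather than details of the original program.

    ```python
    # Sketch of a power-density-spectrum estimate via the FFT of a smoothed
    # (windowed) autocovariance function. Signal and window choices are assumed.
    import numpy as np

    def power_density_spectrum(x, max_lag=None):
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        max_lag = max_lag or n // 4
        # Biased autocovariance estimate for lags 0 .. max_lag-1.
        acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])
        acov *= np.hanning(2 * max_lag)[max_lag:]          # smoothing (lag) window
        # Even extension of the autocovariance, then FFT -> power-density spectrum.
        ext = np.concatenate([acov, acov[-2:0:-1]])
        pds = np.abs(np.fft.rfft(ext))
        freqs = np.fft.rfftfreq(len(ext))
        return freqs, pds

    t = np.arange(2048)
    signal = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.random.randn(t.size)
    freqs, pds = power_density_spectrum(signal)
    print(freqs[np.argmax(pds)])   # peak should be near 0.05 cycles/sample
    ```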

  2. Tiny tiny things with an amazing power

    NASA Astrophysics Data System (ADS)

    McCammon, C. A.; Peiffer, S.; Wan, M.

    2017-12-01

    Our world is made of tiny tiny things that come together to make bigger things. Some of these tiny tiny things have an amazing power and can change themselves in air, the same way that old cars grow spots made up of small red bits. But when our world was very young, these tiny tiny things with the amazing power could not change to small red bits because the air was different. How was it different? For one thing it did not have the stuff that animals need to breathe. But things were not boring on our very young world and much was happening that would eventually change the air so that animals could breathe. We studied some of the stuff that changed during the growing up of our world using strong light that makes funny forms. We learned many new things and that changes happen slowly in water but faster when stuff is dry.

  3. Computer program for afterheat temperature distribution for mobile nuclear power plant

    NASA Technical Reports Server (NTRS)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  4. Quantum Computation

    NASA Astrophysics Data System (ADS)

    Aharonov, Dorit

    In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I left out the developing topic of experimental realizations of the model, and neglected other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I will devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subjected to such inaccuracies. I tried to put all results in their context, asking what the implications to other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation on fundamental physical questions such as the transition from quantum to classical physics.

  5. Initial Visions of Paradise: Antebellum U.S. Government Documents on the South Pacific

    ERIC Educational Resources Information Center

    Chapman, Bert

    2004-01-01

    During the first half of the 19th century, the United States grew from a nation confined to the Atlantic seaboard to a country on the verge of becoming a global power. One factor prompting this growth was the United States' growing intellectual, economic, and strategic interests in the Pacific Ocean. These growing interests were fueled by the…

  6. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  7. Emerging needs for mobile nuclear powerplants

    NASA Technical Reports Server (NTRS)

    Anderson, J. L.

    1972-01-01

    Incentives for broadening the present role of civilian nuclear power to include mobile nuclear power plants that are compact, lightweight, and safe are examined. Specifically discussed is the growing importance of: (1) a new international cargo transportation capability, and (2) the capability for development of resources in previously remote regions of the earth including the oceans and the Arctic. This report surveys present and potential systems (vehicles, remote stations, and machines) that would both provide these capabilities and require enough power to justify using mobile nuclear reactor power plants.

  8. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    PubMed Central

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  9. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    PubMed

    Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  10. Report of the Defense Science Board Task Force on Military Applications of New-Generation Computing Technologies.

    DTIC Science & Technology

    1984-12-01

    During the 1980’s we are seeing enhancement of breadth, power, and accessibility of computers in many dimensions: (1) Powerful, costly, fragile mainframes...

  11. Strategic Leadership Challenges with the Joint Information Environment

    DTIC Science & Technology

    2013-03-01

    In the face of growing cyber attacks against Department of Defense (DoD)... million computers, and 250,000 mobile devices (i.e., BlackBerry). All of this IT capability represented an investment of $37 billion in the Fiscal Year...

  12. Is systems pharmacology ready to impact upon therapy development? A study on the cholesterol biosynthesis pathway

    PubMed Central

    Benson, Helen E; Sharman, Joanna L; Mpamhanga, Chido P; Parton, Andrew; Southan, Christopher; Harmar, Anthony J; Ghazal, Peter

    2017-01-01

    Background and Purpose: An ever‐growing wealth of information on current drugs and their pharmacological effects is available from online databases. As our understanding of systems biology increases, we have the opportunity to predict, model and quantify how drug combinations can be introduced that outperform conventional single‐drug therapies. Here, we explore the feasibility of such systems pharmacology approaches with an analysis of the mevalonate branch of the cholesterol biosynthesis pathway. Experimental Approach: Using open online resources, we assembled a computational model of the mevalonate pathway and compiled a set of inhibitors directed against targets in this pathway. We used computational optimization to identify combination and dose options that show not only maximal efficacy of inhibition on the cholesterol producing branch but also minimal impact on the geranylation branch, known to mediate the side effects of pharmaceutical treatment. Key Results: We describe serious impediments to systems pharmacology studies arising from limitations in the data, incomplete coverage and inconsistent reporting. By curating a more complete dataset, we demonstrate the utility of computational optimization for identifying multi‐drug treatments with high efficacy and minimal off‐target effects. Conclusion and Implications: We suggest solutions that facilitate systems pharmacology studies, based on the introduction of standards for data capture that increase the power of experimental data. We propose a systems pharmacology workflow for the refinement of data and the generation of future therapeutic hypotheses. PMID:28910500

  13. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
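
    For readers unfamiliar with the physics being benchmarked, the sketch below integrates a bare-bones Cahn-Hilliard (spinodal decomposition) model with an explicit Euler step; the grid size, mobility, time step, and double-well free energy are illustrative choices, not the CHiMaD/NIST benchmark specifications.

```python
import numpy as np

def laplacian(field, dx=1.0):
    """5-point periodic Laplacian."""
    return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field) / dx**2

# Illustrative parameters (not the benchmark values).
rng = np.random.default_rng(0)
c = 0.05 * (rng.random((128, 128)) - 0.5)   # small fluctuations about the spinodal composition
kappa, M, dt = 1.0, 1.0, 0.005              # gradient energy, mobility, explicit time step

for step in range(2000):
    mu = (c**3 - c) - kappa * laplacian(c)   # chemical potential with double-well f'(c) = c^3 - c
    c += dt * M * laplacian(mu)              # explicit Euler update of the Cahn-Hilliard equation

print("composition range after coarsening:", c.min(), c.max())
```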

  14. The impact of technological change on census taking.

    PubMed

    Brackstone, G J

    1984-01-01

    The increasing costs of traditional census collection methods have forced census administrators to look at the possibility of using administrative record systems in order to obtain population data. This article looks at the technological developments which have taken place in the last decade, and how they may affect data collection for the 1990 census. Because it is important to allow sufficient developmental and testing time for potential automated methods and technologies, it is not too soon to look at the trends resulting from technological advances and their implications for census data collection. These trends are: 1) the declining ratio of computing costs to manpower costs; 2) the increasing ratio of power and capacity of computers to their physical size; 3) declining data storage costs; 4) the increasing public acceptance of computers; 5) the increasing workforce familiarity with computers; and 6) the growing interactive computing capacity. Traditional use of computers for government data gathering operations was primarily at the processing stage. Now the possibility of applying these trends to census material may influence all aspects of the process, from questionnaire design and production to data analysis. Examples include the production of high quality maps for geographic frameworks, optical readers for data entry, the ability to provide users with a final database as well as printed output, and quicker dissemination of data results. Although these options exist, just like the use of administrative records for statistical purposes, they must be carefully analysed in the context of the purposes for which they were created. Administrative records, in particular, have definition, coverage, and quality limitations that could bias statistical data derived from them. Perhaps they should be used as potential complementary sources of data, and not as replacements for census data. Influencing the evolution of these administrative records will help increase their chances of being used for future census information.

  15. Computational Astrophysics Towards Exascale Computing and Big Data

    NASA Astrophysics Data System (ADS)

    Astsatryan, H. V.; Knyazyan, A. V.; Mickaelian, A. M.

    2016-06-01

    Traditionally, Armenia has held a leading position in both the computer science and Information Technology sector and the Astronomy and Astrophysics sector in the South Caucasus region and beyond. For instance, in recent years Information Technology (IT) has become one of the fastest growing industries of the Armenian economy (EIF 2013). The main objective of this article is to highlight the key activities that will spur Armenia to strengthen its computational astrophysics capacity, based on an analysis of the current trends in e-Infrastructures worldwide.

  16. Asymmetric Base-Bleed Effect on Aerospike Plume-Induced Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Droege, Alan; DAgostino, Mark; Lee, Young-Ching; Williams, Robert

    2004-01-01

    A computational heat transfer design methodology was developed to study the dual-engine linear aerospike plume-induced base-heating environment during a power-pack-out event in ascent flight. It includes a three-dimensional, finite volume, viscous, chemically reacting, and pressure-based computational fluid dynamics formulation; a special base-bleed boundary condition; and a three-dimensional, finite volume, spectral-line-based weighted-sum-of-gray-gases absorption computational radiation heat transfer formulation. A separate radiation model was used for diagnostic purposes. The computational methodology was systematically benchmarked. In this study, near-base radiative heat fluxes were computed, and they compared well with those measured during static linear aerospike engine tests. The base-heating environment of 18 trajectory points selected from three power-pack-out scenarios was computed, and the computed asymmetric base-heating physics were analyzed. The power-pack-out condition has the most impact on convective base heating when it happens early in flight; the source of its impact comes from the asymmetric and reduced base bleed.

  17. Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization

    NASA Astrophysics Data System (ADS)

    Dodge, Doug; Walter, William; Myers, Steve; Ford, Sean; Harris, Dave; Ruppert, Stan; Buttler, Dave; Hauk, Terri

    2013-04-01

    The decrease in costs of both digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory(LLNL) we operate a growing research database of seismic events and waveforms for nuclear explosion monitoring and other applications. Currently the LLNL database contains several million events associated with tens of millions of waveforms at thousands of stations. We are making use of this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. Building on the very efficient correlation methodologies of Harris and Dodge (2011) we computed the waveform correlation for event pairs in the LLNL database in two ways. First we performed entire waveform cross-correlation over seven distinct frequency bands. The correlation coefficient exceeds 0.6 for more than 40 million waveform pairs for several hundred thousand events at more than a thousand stations. These correlations reveal clusters of mining events and aftershock sequences, which can be used to readily identify and locate events. Second we determine relative pick times by correlating signals in time windows for distinct seismic phases. These correlated picks are then used to perform very high accuracy event relocations. We are examining the percentage of events that correlate as a function of magnitude and observing station distance in selected high seismicity regions. Combining these empirical results and those using synthetic data, we are working to quantify relationships between correlation and event pair separation (in epicenter and depth) as well as mechanism differences. Our exploration of these techniques on a large seismic database is in process and we will report on our findings in more detail at the meeting.
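
    A minimal sketch of the waveform-similarity measure underlying this kind of study is shown below (Python/NumPy, with synthetic traces standing in for station waveforms); the production workflow additionally band-passes each trace into the seven frequency bands mentioned above and runs the comparison at database scale.

```python
import numpy as np

def max_normalized_cc(x, y):
    """Maximum normalized cross-correlation coefficient over all lags (illustrative)."""
    x = (x - x.mean()) / (x.std() * len(x))   # normalize so full-overlap lag gives Pearson r
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode="full").max()

# Hypothetical pair of event waveforms recorded at the same station.
t = np.linspace(0, 10, 1000)
wave_a = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
wave_b = np.roll(wave_a, 25) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

# In the workflow described above, coefficients above ~0.6 flag candidate event matches.
print("max CC:", max_normalized_cc(wave_a, wave_b))
```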

  18. Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization

    NASA Astrophysics Data System (ADS)

    Dodge, D.; Walter, W. R.; Myers, S. C.; Ford, S. R.; Harris, D.; Ruppert, S.; Buttler, D.; Hauk, T. F.

    2012-12-01

    The decrease in costs of both digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory (LLNL) we operate a growing research database of seismic events and waveforms for nuclear explosion monitoring and other applications. Currently the LLNL database contains several million events associated with tens of millions of waveforms at thousands of stations. We are making use of this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. Building on the very efficient correlation methodologies of Harris and Dodge (2011) we computed the waveform correlation for event pairs in the LLNL database in two ways. First we performed entire waveform cross-correlation over seven distinct frequency bands. The correlation coefficient exceeds 0.6 for more than 40 million waveform pairs for several hundred thousand events at more than a thousand stations. These correlations reveal clusters of mining events and aftershock sequences, which can be used to readily identify and locate events. Second we determine relative pick times by correlating signals in time windows for distinct seismic phases. These correlated picks are then used to perform very high accuracy event relocations. We are examining the percentage of events that correlate as a function of magnitude and observing station distance in selected high seismicity regions. Combining these empirical results and those using synthetic data, we are working to quantify relationships between correlation and event pair separation (in epicenter and depth) as well as mechanism differences. Our exploration of these techniques on a large seismic database is in process and we will report on our findings in more detail at the meeting.

  19. Design of large Francis turbine using optimal methods

    NASA Astrophysics Data System (ADS)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has especially been involved in the development and equipment of the largest power plants in the world : Three Gorges (China -32×767 MW - 61 to 113 m), Itaipu (Brazil- 20x750 MW - 98.7m to 127m) and Xiangjiaba (China - 8x812 MW - 82.5m to 113.6m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand of renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such the planned 1GW type units in China. The turbine design for such units requires specific care by using the state of the art in computation methods and the latest technologies in model testing as well as the maximum feedback from operation of Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including the global and local optimization methods. The design of the spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used, thanks to the growing computation capacity of the HPC clusters: the intensive use of such optimization methods at the turbine design stage allows to reach very high level of performances, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.
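
    The optimization loop described above couples a genetic algorithm to external blade-design, meshing, and Navier-Stokes tools; the sketch below shows only the genetic-algorithm skeleton with a stand-in analytic objective, so every function and parameter here is a placeholder rather than the actual design chain.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_efficiency(params):
    """Placeholder objective standing in for the blade design -> mesh -> Navier-Stokes
    evaluation chain (hypothetical; the real loop calls external solvers)."""
    target = np.array([0.3, -0.1, 0.7, 0.05])   # fictitious optimum
    return -np.sum((params - target) ** 2)

pop_size, n_params, n_gen, sigma = 40, 4, 60, 0.1
population = rng.uniform(-1, 1, (pop_size, n_params))

for gen in range(n_gen):
    fitness = np.array([surrogate_efficiency(p) for p in population])
    parents = population[np.argsort(fitness)[-pop_size // 2:]]       # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_params) < 0.5, a, b)           # uniform crossover
        child += sigma * rng.standard_normal(n_params)               # Gaussian mutation
        children.append(child)
    population = np.vstack([parents, children])

best = population[np.argmax([surrogate_efficiency(p) for p in population])]
print("best design parameters:", best)
```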

  20. The Environmental Impact of Electrical Power Generation: Nuclear and Fossil. A Minicourse for Secondary Schools and Adult Education. Text.

    ERIC Educational Resources Information Center

    McDermott, John J., Ed.

    This course, developed for use in secondary and adult education, is an effort to describe the cost-benefit ratio of the various methods of generation of electrical power in an era when the requirement for additional sources of power is growing at an ever-increasing rate and environmental protection is a major concern. This course was written and…

  1. Cooling of a sunspot

    NASA Technical Reports Server (NTRS)

    Boruta, N.

    1977-01-01

    The question of whether a perturbed photospheric area can grow into a region of reduced temperature resembling a sunspot is investigated by considering whether instabilities exist that can lead to a growing temperature change and corresponding magnetic-field concentration in some region of the photosphere. After showing that Alfven cooling can lead to these instabilities, the effect of a heat sink on the temperature development of a perturbed portion of the photosphere is studied. A simple form of Alfven-wave cooling is postulated, and computations are performed to determine whether growing modes exist for physically relevant boundary conditions. The results indicate that simple inhibition of convection does not give growing modes, but Alfven-wave production can result in cooling that leads to growing field concentration. It is concluded that since growing instabilities can occur with strong enough cooling, it is quite possible that energy loss through Alfven waves gives rise to a self-generating temperature change and sunspot formation.

  2. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  3. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  4. Pressure fluctuations beneath turbulent spots and instability wave packets in a hypersonic boundary layer.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beresh, Steven Jay; Casper, Katya M.; Schneider, Steven P.

    2010-12-01

    The development of turbulent spots in a hypersonic boundary layer was studied on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Under quiet flow conditions, the nozzle wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large turbulent spots that can be readily measured with pressure transducers. Measurements of naturally occurring wave packets and developing turbulent spots were made. The peak frequencies of these natural wave packets were in agreement with second-mode computations. For a controlled study, the breakdown of disturbances created by spark and glow perturbations was studied at similar freestream conditions. The spark perturbations were the most effective at creating large wave packets that broke down into turbulent spots. The flow disturbances created by the controlled perturbations were analyzed to obtain amplitude criteria for nonlinearity and breakdown as well as the convection velocities of the turbulent spots. Disturbances first grew into linear instability waves and then quickly became nonlinear. Throughout the nonlinear growth of the wave packets, large harmonics are visible in the power spectra. As breakdown begins, the peak amplitudes of the instability waves and harmonics decrease into the rising broad-band frequencies. Instability waves are still visible on either side of the growing turbulent spots during this breakdown process.

  5. Biologically inspired collision avoidance system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.

    2009-05-01

    In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications. These simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption and small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.

  6. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, owing to communication and computational limitations, the currently used supervisory control and data acquisition (SCADA) systems can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization for a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two different scales of power systems, one of which includes a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Power combining in an array of microwave power rectifiers

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1979-01-01

    This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.

  8. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores, since those platforms remain constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we proposed a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
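
    A schematic of the two compression steps named above, magnitude pruning and weight quantization, applied to a single weight matrix with NumPy; the sparsity and bit-width values are illustrative, and the paper's framework operates layer by layer inside a full DNN toolchain.

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.8, n_bits=8):
    """Magnitude pruning followed by symmetric linear quantization (schematic)."""
    # 1. Magnitude pruning: zero out the smallest |w| until the target sparsity is reached.
    threshold = np.quantile(np.abs(weights), sparsity)
    pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

    # 2. Symmetric n-bit quantization of the surviving weights.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(pruned).max() / qmax if pruned.any() else 1.0
    q = np.clip(np.round(pruned / scale), -qmax, qmax).astype(np.int8)
    return q, scale          # dequantize later as q * scale

rng = np.random.default_rng(0)
layer = rng.standard_normal((256, 128)).astype(np.float32)   # hypothetical layer weights
q, scale = prune_and_quantize(layer)
print("nonzero fraction:", np.count_nonzero(q) / q.size, "| scale:", scale)
```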

  9. Sustainable Low-Carbon Expansion for the Power Sector of an Emerging Economy: The Case of Kenya.

    PubMed

    Carvallo, Juan-Pablo; Shaw, Brittany J; Avila, Nkiruka I; Kammen, Daniel M

    2017-09-05

    Fast growing and emerging economies face the dual challenge of sustainably expanding and improving their energy supply and reliability while at the same time reducing poverty. Critical to such a transformation is providing affordable and sustainable access to electricity. We use the capacity expansion model SWITCH to explore low carbon development pathways for the Kenyan power sector under a set of plausible scenarios for fast growing economies that include uncertainty in load projections, capital costs, operational performance, and technology and environmental policies. In addition to an aggressive and needed expansion of overall supply, the Kenyan power system presents a unique transition from one basal renewable resource, hydropower, to another based on geothermal and wind power for ∼90% of total capacity. We find geothermal resource adoption is more sensitive to operational degradation than to high capital costs, which suggests an emphasis on ongoing maintenance subsidies rather than upfront capital cost subsidies. We also find that a cost-effective and viable suite of solutions includes availability of storage, diesel engines, and transmission expansion to provide the flexibility needed to enable up to 50% wind power penetration. In an already low-carbon system, typical externality pricing for CO2 has little to no effect on technology choice. Consequently, a "zero carbon emissions" by 2030 scenario is possible with only moderate levelized cost increases of between $3 and $7/MWh, with a number of social and reliability benefits. Our results suggest that fast growing and emerging economies could benefit by incentivizing anticipated strategic transmission expansion. Existing and new diesel and natural gas capacity can play an important role in providing flexibility and meeting peak demand in specific hours without a significant increase in carbon emissions, although more research is required on the impacts of other pollutants.

  10. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses in the reformer, the shaft converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  11. What Can You Learn from a Cell Phone? Almost Anything!

    ERIC Educational Resources Information Center

    Prensky, Marc

    2005-01-01

    Today's high-end cell phones have the computing power of a mid-1990s personal computer (PC)--while consuming only one one-hundredth of the energy. Even the simplest, voice-only phones have more complex and powerful chips than the 1969 on-board computer that landed a spaceship on the moon. In the United States, it is almost universally acknowledged…

  12. easyGWAS: A Cloud-Based Platform for Comparing the Results of Genome-Wide Association Studies.

    PubMed

    Grimm, Dominik G; Roqueiro, Damian; Salomé, Patrice A; Kleeberger, Stefan; Greshake, Bastian; Zhu, Wangsheng; Liu, Chang; Lippert, Christoph; Stegle, Oliver; Schölkopf, Bernhard; Weigel, Detlef; Borgwardt, Karsten M

    2017-01-01

    The ever-growing availability of high-quality genotypes for a multitude of species has enabled researchers to explore the underlying genetic architecture of complex phenotypes at an unprecedented level of detail using genome-wide association studies (GWAS). The systematic comparison of results obtained from GWAS of different traits opens up new possibilities, including the analysis of pleiotropic effects. Other advantages that result from the integration of multiple GWAS are the ability to replicate GWAS signals and to increase statistical power to detect such signals through meta-analyses. In order to facilitate the simple comparison of GWAS results, we present easyGWAS, a powerful, species-independent online resource for computing, storing, sharing, annotating, and comparing GWAS. The easyGWAS tool supports multiple species, the uploading of private genotype data and summary statistics of existing GWAS, as well as advanced methods for comparing GWAS results across different experiments and data sets in an interactive and user-friendly interface. easyGWAS is also a public data repository for GWAS data and summary statistics and already includes published data and results from several major GWAS. We demonstrate the potential of easyGWAS with a case study of the model organism Arabidopsis thaliana , using flowering and growth-related traits. © 2016 American Society of Plant Biologists. All rights reserved.

  13. PGMS: A Case Study of Collecting PDA-Based Geo-Tagged Malaria-Related Survey Data

    PubMed Central

    Zhou, Ying; Lobo, Neil F.; Wolkon, Adam; Gimnig, John E.; Malishee, Alpha; Stevenson, Jennifer; Sulistyawati; Collins, Frank H.; Madey, Greg

    2014-01-01

    Using mobile devices, such as personal digital assistants (PDAs), smartphones, tablet computers, etc., to electronically collect malaria-related field data is the way forward for field questionnaires. This case study seeks to design a generic survey framework, the PDA-based geo-tagged malaria-related data collection tool (PGMS), that can be used not only for large-scale community-level geo-tagged electronic malaria-related surveys, but also for a wide variety of electronic data collections of other infectious diseases. The framework includes two parts: the database designed for subsequent cross-sectional data analysis and the customized programs for the six study sites (two in Kenya, three in Indonesia, and one in Tanzania). In addition to the framework development, we also present the methods used when configuring and deploying the PDAs to 1) reduce data entry errors, 2) conserve battery power, 3) field-install the programs onto dozens of handheld devices, 4) translate electronic questionnaires into local languages, 5) prevent data loss, and 6) transfer data from PDAs to computers for future analysis and storage. Since 2008, PGMS has successfully accomplished quite a few surveys that recorded 10,871 compounds and households, 52,126 persons, and 17,100 bed nets from the six sites. These numbers are still growing. PMID:25048377

  14. Formalizing Neurath's ship: Approximate algorithms for online causal learning.

    PubMed

    Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A

    2017-04-01

    Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Approximations of noise covariance in multi-slice helical CT scans: impact on lung nodule size estimation.

    PubMed

    Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J

    2011-10-07

    Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult to tackle. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
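
    As a flavor of the NPS approximation discussed above, the sketch below estimates a 2D in-plane noise power spectrum from repeated noise regions of interest and reduces it to a radial profile; the inputs are synthetic white noise, and the detrending and windowing details of a real MSCT measurement are omitted.

```python
import numpy as np

def radial_nps(noise_stack, voxel_size=1.0):
    """Estimate a 2D in-plane NPS from a stack of zero-mean noise ROIs, then average radially.
    Schematic only; real measurements require detrending and careful ROI selection."""
    n_roi, ny, nx = noise_stack.shape
    dft = np.fft.fftshift(np.fft.fft2(noise_stack), axes=(-2, -1))
    nps2d = (np.abs(dft) ** 2).mean(axis=0) * (voxel_size ** 2) / (ny * nx)
    # Radial average to a 1D profile.
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    return np.bincount(r.ravel(), nps2d.ravel()) / np.bincount(r.ravel())

rng = np.random.default_rng(0)
rois = rng.standard_normal((50, 64, 64))      # hypothetical white-noise ROIs
print("NPS profile (first 5 radial bins):", radial_nps(rois)[:5])
```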

  16. Integrating Sensory/Actuation Systems in Agricultural Vehicles

    PubMed Central

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles. PMID:24577525

  17. Integrating sensory/actuation systems in agricultural vehicles.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-02-26

    In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles.

  18. Securing resource constraints embedded devices using elliptic curve cryptography

    NASA Astrophysics Data System (ADS)

    Tam, Tony; Alfasi, Mohamed; Mozumdar, Mohammad

    2014-06-01

    The use of smart embedded devices has been growing rapidly in recent times because of the miniaturization of sensors and platforms. Securing data from these embedded devices has now become one of the core challenges in both industry and the research community. Being embedded, these devices have tight constraints on resources such as power, computation, memory, etc. Hence it is very difficult to implement traditional Public Key Cryptography (PKC) on these resource-constrained embedded devices. Moreover, most public key security protocols require both the public and private key to be generated together. In contrast with this, Identity Based Encryption (IBE), a public key cryptography protocol, allows a public key to be generated from an arbitrary string and the corresponding private key to be generated later on demand. While IBE has been actively studied and widely applied in cryptography research, conventional IBE primitives are also computationally demanding and cannot be efficiently implemented on embedded systems. A simplified version of identity-based encryption has proven to be robust while satisfying the tight resource budget of the embedded platform. In this paper, we describe the choice of several parameters for implementing lightweight IBE in resource-constrained embedded sensor nodes. Our implementation of IBE is built using elliptic curve cryptography (ECC).
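
    The paper's lightweight IBE construction is not reproduced here; as a flavor of the elliptic-curve operations and short keys that make ECC attractive on constrained nodes, the sketch below performs a standard ECDH key agreement with the Python `cryptography` package (names such as `node_key`, `gateway_key`, and the `info` label are arbitrary).

```python
# Plain ECDH key agreement on a 256-bit curve: NOT the paper's IBE scheme, only an
# illustration of the underlying elliptic-curve arithmetic and small key sizes.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair on the P-256 curve.
node_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from their own private key and the
# peer's public key, then stretch it into a symmetric session key with HKDF.
shared_node = node_key.exchange(ec.ECDH(), gateway_key.public_key())
shared_gateway = gateway_key.exchange(ec.ECDH(), node_key.public_key())

derive = lambda secret: HKDF(algorithm=hashes.SHA256(), length=32,
                             salt=None, info=b"sensor-session").derive(secret)
assert derive(shared_node) == derive(shared_gateway)
print("32-byte session key established")
```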

  19. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Marini, J. W.

    1977-01-01

    A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller value of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison, with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced-rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite-aided search and rescue mission.
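
    The quadratic-form variant of the test can be sketched as follows (Python/NumPy); the residual vectors and covariance-like matrix are fabricated for illustration, and the pseudo-inverse stands in for the reduced-rank weighting matrix described above.

```python
import numpy as np

def ambiguity_test(residuals_a, residuals_b, cov):
    """Choose between two candidate Doppler solutions by comparing quadratic forms of
    their post-fit residuals, weighted by the pseudo-inverse of a (possibly
    rank-deficient) covariance-like matrix. Schematic, with made-up inputs."""
    W = np.linalg.pinv(cov)                  # pseudo-inverse weighting matrix
    q_a = residuals_a @ W @ residuals_a
    q_b = residuals_b @ W @ residuals_b
    return ("A", q_a, q_b) if q_a < q_b else ("B", q_a, q_b)

rng = np.random.default_rng(3)
n = 20
cov = np.ones((n, n)) * 0.2 + np.eye(n)      # illustrative correlated-noise model
r_true = 0.5 * rng.standard_normal(n)        # residuals of the correct solution
r_wrong = r_true + 2.0                       # biased residuals of the mirror-image solution
print(ambiguity_test(r_true, r_wrong, cov))  # expected to select solution "A"
```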

  20. Using NCAR Yellowstone for PhotoVoltaic Power Forecasts with Artificial Neural Networks and an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.

    2016-12-01

    A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of power generated by photovoltaic (PV) power plants, using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that a combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive-scale computation.
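
    A minimal sketch of the analog-ensemble step, ignoring the ANN component and the predictor weighting used in the study: for each new NWP forecast, the k most similar historical forecasts are located and their observed power values form the ensemble. All data below are synthetic placeholders.

```python
import numpy as np

def analog_ensemble(nwp_history, power_history, nwp_now, k=20):
    """Return a deterministic forecast (ensemble mean) and P10/P50/P90 quantiles
    from the k historical analogs closest to the current NWP forecast (schematic)."""
    # Normalize each predictor so distances are comparable across variables.
    mu, sigma = nwp_history.mean(axis=0), nwp_history.std(axis=0) + 1e-9
    d = np.linalg.norm((nwp_history - mu) / sigma - (nwp_now - mu) / sigma, axis=1)
    members = power_history[np.argsort(d)[:k]]   # observed power of the best analogs
    return members.mean(), np.percentile(members, [10, 50, 90])

# Hypothetical archive: predictors = [irradiance, cloud cover, temperature].
rng = np.random.default_rng(7)
hist_nwp = rng.random((5000, 3))
hist_power = 800 * hist_nwp[:, 0] * (1 - 0.5 * hist_nwp[:, 1]) + 20 * rng.standard_normal(5000)
forecast_now = np.array([0.8, 0.2, 0.6])

mean_kw, quantiles = analog_ensemble(hist_nwp, hist_power, forecast_now)
print("deterministic forecast (kW):", round(mean_kw, 1), "| P10/P50/P90:", quantiles.round(1))
```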
