Visual ergonomics and computer work--is it all about computer glasses?
Jonsson, Christina
2012-01-01
The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the requirement that the employer provide employees with computer glasses is outdated.
Cork, Randy D.; Detmer, William M.; Friedman, Charles P.
1998-01-01
This paper describes details of four scales of a questionnaire—“Computers in Medical Care”—measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading < .40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality with reliability of .81 and the second demand for usability with reliability of .69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
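The scale-construction procedure described in this abstract lends itself to a brief illustration. The following is a minimal sketch, not the authors' code, of the two steps it mentions: dropping items whose factor loading falls below .40 and computing Cronbach's alpha for the retained items. The synthetic responses and the loading values are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix; returns Cronbach's alpha."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def retain_items(loadings: np.ndarray, threshold: float = 0.40) -> np.ndarray:
    """Indices of items whose factor loading is at least the threshold."""
    return np.where(np.abs(loadings) >= threshold)[0]

# hypothetical data: 771 respondents, 8 Likert items, plus assumed factor loadings
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(771, 8)).astype(float)
loadings = np.array([0.72, 0.65, 0.31, 0.58, 0.44, 0.28, 0.81, 0.50])

kept = retain_items(loadings)                    # items with loadings .31 and .28 are dropped
alpha = cronbach_alpha(responses[:, kept])
print(f"retained items: {kept.tolist()}, Cronbach's alpha = {alpha:.2f}")
```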
Provider-Independent Use of the Cloud
NASA Astrophysics Data System (ADS)
Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron
Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers, and each has a different resource usage model, application process and application programming interface (API); developing generic multi-resource provider applications is thus difficult and time-consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider-neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider-neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
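To make the idea of a provider-neutral abstraction layer concrete, here is a minimal sketch, not the authors' API, of a single resource-usage interface that per-provider adapters implement so that application code never touches provider-specific calls. Class and method names are assumptions; a real adapter would wrap the provider's own SDK behind these calls.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ResourceSpec:
    cores: int
    memory_gb: int
    image: str

class CloudProvider(ABC):
    """Uniform interface hiding each provider's own usage model and API."""

    @abstractmethod
    def authenticate(self, credentials: dict) -> None: ...

    @abstractmethod
    def provision(self, spec: ResourceSpec) -> str:
        """Start a resource and return a provider-independent handle."""

    @abstractmethod
    def release(self, handle: str) -> None: ...

class ExampleProviderAdapter(CloudProvider):
    """Hypothetical adapter; a real one would call the provider's SDK here."""
    def authenticate(self, credentials: dict) -> None:
        self._token = credentials.get("api_key")
    def provision(self, spec: ResourceSpec) -> str:
        return f"example-{spec.cores}c-{spec.memory_gb}g"
    def release(self, handle: str) -> None:
        pass

def run_job(provider: CloudProvider, spec: ResourceSpec) -> None:
    handle = provider.provision(spec)   # application code never sees provider details
    provider.release(handle)

run_job(ExampleProviderAdapter(), ResourceSpec(cores=4, memory_gb=16, image="base"))
```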
Integrating Grid Services into the Cray XT4 Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy
2009-05-01
The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.
Utilization of KSC Present Broadband Communications Data System for Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2002-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.
Utilization of KSC Present Broadband Communications Data System For Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2001-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
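As an illustration of the two-part structure described above, the following is a minimal discrete event simulation sketch, not the authors' model: request arrivals are drawn from a probability distribution and served by a fixed pool of compute resources, with queueing when the pool is exhausted. The exponential distributions and parameter values are assumptions for illustration.

```python
import heapq
import random

def simulate(servers=4, arrival_rate=3.0, service_rate=1.0, horizon=1000.0, seed=1):
    """Returns (completed requests, mean wait) for a simple on-demand resource pool."""
    random.seed(seed)
    events = []                                   # (time, kind) min-heap
    heapq.heappush(events, (random.expovariate(arrival_rate), "arrival"))
    busy, queued, completed, wait_total = 0, 0, 0, 0.0
    queue_times = []                              # arrival times of waiting requests
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if busy < servers:                    # capacity available: start service now
                busy += 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:                                 # resource constraint hit: wait in queue
                queued += 1
                queue_times.append(t)
        else:                                     # a service finished
            completed += 1
            if queued:
                queued -= 1
                wait_total += t - queue_times.pop(0)
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                busy -= 1
    return completed, wait_total / max(completed, 1)

print(simulate())
```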
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the internet, delivering application resources and data to users on demand. Cloud computing is based on a consumer-provider model: the cloud provider supplies resources that consumers access in order to build their applications according to their demands. A cloud data center is a large pool of shared resources made available to cloud users. Virtualization is the heart of the cloud computing model; it provides virtual machines configured for specific applications, and those applications are free to choose their own configurations. On one hand there is a huge number of resources, and on the other hand a huge number of requests must be served effectively. Therefore, the resource allocation and scheduling policies play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component, which helps to increase cloud resource utilization by monitoring the algorithm's state and altering it based on artificial intelligence. CloudSim, an extensible toolkit that simulates a cloud computing environment, is used in this proposal.
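A minimal sketch of the assignment step at the core of such a policy, using SciPy's implementation of the Hungarian algorithm (linear_sum_assignment). The cost matrix here is synthetic; in the proposed system a monitor component would refresh it from observed VM load. This is an illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows = pending requests, columns = candidate VMs; entries = estimated cost
# (e.g. expected completion time given each VM's current load)
cost = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])

rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
for request, vm in zip(rows, cols):
    print(f"request {request} -> VM {vm} (cost {cost[request, vm]})")
print("total cost:", cost[rows, cols].sum())
```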
NASA Astrophysics Data System (ADS)
Yang, Wei; Hall, Trevor
2012-12-01
The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of the Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.
Implications of Ubiquitous Computing for the Social Studies Curriculum
ERIC Educational Resources Information Center
van Hover, Stephanie D.; Berson, Michael J.; Bolick, Cheryl Mason; Swan, Kathleen Owings
2004-01-01
In March 2002, members of the National Technology Leadership Initiative (NTLI) met in Charlottesville, Virginia to discuss the potential effects of ubiquitous computing on the field of education. Ubiquitous computing, or "on-demand availability of task-necessary computing power," involves providing every student with a handheld computer--a…
Cloud Computing Security Issue: Survey
NASA Astrophysics Data System (ADS)
Kamal, Shailza; Kaur, Rajpreet
2011-12-01
Cloud computing has been a growing field in the IT industry since it was proposed by IBM in 2007, and other companies such as Google, Amazon, and Microsoft have since brought further products to cloud computing. Cloud computing is internet-based computing that shares resources and information on demand, providing services such as SaaS, IaaS and PaaS. The services and resources are shared through virtualization, which runs multiple applications on the cloud. This discussion surveys the challenges around security issues in cloud computing and describes some standards and protocols that show how security can be managed.
ERIC Educational Resources Information Center
Conn, Samuel S.; Reichgelt, Han
2013-01-01
Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…
2010-07-01
Cloud computing, an emerging form of computing in which users have access to scalable, on-demand capabilities that are provided through Internet... cloud computing, (2) the information security implications of using cloud computing services in the Federal Government, and (3) federal guidance and... efforts to address information security when using cloud computing. The complete report is titled Information Security: Federal Guidance Needed to
Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction
Agulleiro, Jose-Ignacio; Fernández, José Jesús
2012-01-01
Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
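The following is a minimal sketch, assuming a parallel-beam geometry, of the general strategy the abstract outlines: vectorized per-slice (unfiltered) backprojection combined with multithreading across slices. It is illustrative only and not the authors' optimized code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject(sinogram: np.ndarray, angles: np.ndarray, size: int) -> np.ndarray:
    """Reconstruct one slice from its sinogram (angles x detector bins)."""
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((size, size))
    centre = sinogram.shape[1] / 2
    for proj, theta in zip(sinogram, angles):
        t = X * np.cos(theta) + Y * np.sin(theta) + centre   # detector coordinate
        recon += np.interp(t.ravel(), np.arange(sinogram.shape[1]), proj).reshape(size, size)
    return recon / len(angles)

def reconstruct_volume(sinograms, angles, size, workers=8):
    # one task per slice: the vectorized NumPy work inside each task releases the GIL
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: backproject(s, angles, size), sinograms))

angles = np.linspace(0, np.pi, 60, endpoint=False)
sinograms = [np.random.rand(60, 128) for _ in range(4)]      # synthetic slice data
volume = reconstruct_volume(sinograms, angles, 128)
```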
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.
Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
2012-01-01
"Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…
ERIC Educational Resources Information Center
Race, Elizabeth A.; Shanker, Shanti; Wagner, Anthony D.
2009-01-01
Past experience is hypothesized to reduce computational demands in PFC by providing bottom-up predictive information that informs subsequent stimulus-action mapping. The present fMRI study measured cortical activity reductions ("neural priming"/"repetition suppression") during repeated stimulus classification to investigate the mechanisms through…
Travel demand forecasting models: a comparison of EMME/2 and QRS II using a real-world network.
DOT National Transportation Integrated Search
2000-10-01
In order to automate the travel demand forecasting process in urban transportation planning, a number of commercial computer-based travel demand forecasting models have been developed, which have provided transportation planners with powerful and...
Optical Computers and Space Technology
NASA Technical Reports Server (NTRS)
Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela
1995-01-01
The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide a way out of the extreme limitations that conventional electronic logic circuits impose on the growth of speed and complexity of today's computations. The new optical technology has increased the demand for high-quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high-quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties, which are quite useful for optical computer technology.
NASA Astrophysics Data System (ADS)
Yang, Wei; Hall, Trevor J.
2013-12-01
The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.
Race, Elizabeth A; Shanker, Shanti; Wagner, Anthony D
2009-09-01
Past experience is hypothesized to reduce computational demands in PFC by providing bottom-up predictive information that informs subsequent stimulus-action mapping. The present fMRI study measured cortical activity reductions ("neural priming"/"repetition suppression") during repeated stimulus classification to investigate the mechanisms through which learning from the past decreases demands on the prefrontal executive system. Manipulation of learning at three levels of representation-stimulus, decision, and response-revealed dissociable neural priming effects in distinct frontotemporal regions, supporting a multiprocess model of neural priming. Critically, three distinct patterns of neural priming were identified in lateral frontal cortex, indicating that frontal computational demands are reduced by three forms of learning: (a) cortical tuning of stimulus-specific representations, (b) retrieval of learned stimulus-decision mappings, and (c) retrieval of learned stimulus-response mappings. The topographic distribution of these neural priming effects suggests a rostrocaudal organization of executive function in lateral frontal cortex.
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support
Camargo, João; Rochol, Juergen; Gerla, Mario
2018-01-01
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such a service migration for video services. Finally, we present potential research challenges and trends. PMID:29364172
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.
Rosário, Denis; Schimuneck, Matias; Camargo, João; Nobre, Jéferson; Both, Cristiano; Rochol, Juergen; Gerla, Mario
2018-01-24
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such a service migration for video services. Finally, we present potential research challenges and trends.
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105
Implementing Computer Integrated Manufacturing Technician Program.
ERIC Educational Resources Information Center
Gibbons, Roger
A computer-integrated manufacturing (CIM) technician program was developed to provide training and technical assistance to meet the needs of business and industry in the face of the demands of high technology. The Computer and Automated Systems Association (CASA) of the Society of Manufacturing Engineers provided the incentive and guidelines…
Education of Engineering Students within a Multimedia/Hypermedia Environment--A Review.
ERIC Educational Resources Information Center
Anderl, R.; Vogel, U. R.
This paper summarizes the activities of the Darmstadt University Department of Computer Integrated Design (Germany) related to: (1) distributed lectures (i.e., lectures distributed online through computer networks), including equipment used and ensuring sound and video quality; (2) lectures on demand, including providing access through the World…
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use case.
USDA-ARS?s Scientific Manuscript database
Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...
Analysis and design of hospital management information system based on UML
NASA Astrophysics Data System (ADS)
Ma, Lin; Zhao, Huifang; You, Shi Jun; Ge, Wenyong
2018-05-01
With the rapid development of computer technology, computer information management systems have been utilized in many industries. A Hospital Information System (HIS) helps provide data for directors, lightens the workload of medical workers, and improves their efficiency. Based on the HIS demand analysis and system design, this paper focuses on utilizing Unified Modeling Language (UML) models to establish the use case diagram, class diagram, sequence chart and collaboration diagram, satisfying the demands of daily patient visits, inpatient care, drug management and other relevant operations. Finally, the paper summarizes the problems of the system and puts forward an outlook for the HIS.
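As a purely hypothetical illustration of the kind of entities such a class diagram might capture (patient visits, inpatient records, drug stock), here is a small sketch; the class names and fields are assumptions, not the paper's design.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Visit:
    visit_date: date
    department: str
    diagnosis: str = ""

@dataclass
class Patient:
    patient_id: str
    name: str
    visits: list = field(default_factory=list)   # association: one patient, many visits

@dataclass
class DrugStockItem:
    drug_code: str
    name: str
    quantity: int

    def dispense(self, amount: int) -> None:
        """Decrease stock, guarding against over-dispensing."""
        if amount > self.quantity:
            raise ValueError("insufficient stock")
        self.quantity -= amount

alice = Patient("P001", "Alice")
alice.visits.append(Visit(date(2018, 5, 1), "cardiology"))
```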
ERIC Educational Resources Information Center
Piele, Philip K.
This document shows how computer technology can aid educators in meeting demands for improved class scheduling and more efficient use of transportation resources. The first section surveys literature on operational systems that provide individualized scheduling for students, varied class structures, and maximum use of space and staff skills.…
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia
2006-01-01
The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge for traditional CPU-based computing resources, which either cannot meet the whole computational demand or are not easily available due to high costs. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957
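The decoupling described in this abstract can be sketched in a few lines. The following illustration uses NumPy rather than CUDA and a FitzHugh-Nagumo cell model as a stand-in for the detailed atrial model: each time step performs an independent per-cell ODE update (the embarrassingly parallel, GPU-friendly part) followed by an explicit diffusion update of the monodomain equation. Parameter values are illustrative assumptions.

```python
import numpy as np

def ode_step(v, w, dt, stim):
    """Pointwise cell-model update; every grid point is independent."""
    dv = v - v**3 / 3 - w + stim
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

def diffusion_step(v, dt, D, dx):
    """Explicit finite-difference Laplacian (no-flux boundaries via edge padding)."""
    vp = np.pad(v, 1, mode="edge")
    lap = (vp[:-2, 1:-1] + vp[2:, 1:-1] + vp[1:-1, :-2] + vp[1:-1, 2:] - 4 * v) / dx**2
    return v + dt * D * lap

n, dt, dx, D = 128, 0.05, 1.0, 0.1
v = -1.2 * np.ones((n, n))
w = np.zeros((n, n))
stim = np.zeros((n, n))
stim[:5, :5] = 0.5                       # corner stimulus to launch a wave
for step in range(2000):
    v, w = ode_step(v, w, dt, stim if step < 100 else 0.0)
    v = diffusion_step(v, dt, D, dx)
```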
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge for traditional CPU-based computing resources, which either cannot meet the whole computational demand or are not easily available due to high costs. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1998-05-01
Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives, as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental resolution methodologies in the form of computational, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.
Cloud Computing with iPlant Atmosphere.
McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos
2013-10-15
Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, which can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.
1983-01-01
The potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for 1980, 1990, and 2000; for voice, data and video services; for terrestrial and satellite delivery modes; and for C-, Ku- and Ka-bands.
NASA Astrophysics Data System (ADS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.
1983-08-01
The potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose, the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for 1980, 1990, and 2000; for voice, data and video services; for terrestrial and satellite delivery modes; and for C-, Ku- and Ka-bands.
Requirements for Next Generation Comprehensive Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne; Data, Anubhav
2008-01-01
The unique demands of rotorcraft aeromechanics analysis have led to the development of software tools that are described as comprehensive analyses. The next generation of rotorcraft comprehensive analyses will be driven and enabled by the tremendous capabilities of high performance computing, particularly modular and scalable software executed on multiple cores. Development of a comprehensive analysis based on high performance computing both demands and permits a new analysis architecture. This paper describes a vision of the requirements for this next generation of comprehensive analyses of rotorcraft. The requirements for what must be included are described and substantiated, and justification is provided for what should be excluded. With this guide, a path to the next-generation code can be found.
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a kind of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
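The tradeoff discussed in this chapter summary can be illustrated with a small example comparing an analytical screening solution to a simple numerical solver for the same problem: steady one-dimensional advection with first-order biodegradation, C(x) = C0 exp(-k x / v), versus an explicit upwind scheme marched to steady state. All parameter values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

C0, v, k, L = 1.0, 0.5, 0.05, 100.0        # inlet conc., velocity, decay rate, domain length
x = np.linspace(0.0, L, 201)
analytical = C0 * np.exp(-k * x / v)       # rapid screening solution

dx = x[1] - x[0]
dt = 0.5 * dx / v                          # CFL-limited time step for the upwind scheme
C = np.zeros_like(x)
C[0] = C0
for _ in range(20000):                     # march the transient solution to steady state
    C[1:] = C[1:] - dt * (v * (C[1:] - C[:-1]) / dx + k * C[1:])
    C[0] = C0                              # fixed inlet concentration

print("max abs difference between numerical and analytical:", np.max(np.abs(C - analytical)))
```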
Secure Wireless Networking at Simon Fraser University.
ERIC Educational Resources Information Center
Johnson, Worth
2003-01-01
Describes the wireless local area network (WLAN) at Simon Fraser University, British Columbia, Canada. Originally conceived to address computing capacity and reduce university computer space demands, the WLAN has provided a seamless computing environment for students and solved a number of other campus problems as well. (SLD)
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources, utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
Earth Science Informatics Comes of Age
NASA Technical Reports Server (NTRS)
Jodha, Siri; Khalsa, S.; Ramachandran, Rahul
2014-01-01
The volume and complexity of Earth science data have steadily increased, placing ever-greater demands on researchers, software developers and data managers tasked with handling such data. Additional demands arise from requirements being levied by funding agencies and governments to better manage, preserve and provide open access to data. Fortunately, over the past 10-15 years significant advances in information technology, such as increased processing power, advanced programming languages, more sophisticated and practical standards, and near-ubiquitous internet access have made the jobs of those acquiring, processing, distributing and archiving data easier. These advances have also led to an increasing number of individuals entering the field of informatics as it applies to Geoscience and Remote Sensing. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of data, information, and knowledge. Informatics also encompasses the use of computers and computational methods to support decisionmaking and other applications for societal benefits.
ATLAS user analysis on private cloud resources at GoeGrid
NASA Astrophysics Data System (ADS)
Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.
2015-12-01
User analysis job demands can exceed available computing resources, especially before major conferences. ATLAS physics results can potentially be slowed down due to the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended by using resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure by using the same mechanism, which is already in use at Tier-2: A designated Panda-Queue is monitored and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to the demand. Thereby, the use of cloud resources is completely transparent to the user. However, using this approach, submitted user analysis jobs can still suffer from a certain delay introduced by waiting time in the queue and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.
NASA Astrophysics Data System (ADS)
Werner, Teresa; Weckenmann, Albert
2010-05-01
Due to increasing requirements on the accuracy and reproducibility of measurement results together with a rapid development of novel technologies for the execution of measurements, there is a high demand for adequately qualified metrologists. Accordingly, a variety of training offers are provided by machine manufacturers, universities and other institutions. Yet, for an interested learner it is very difficult to define an optimal training schedule for his/her individual demands. Therefore, a computer-based assistance tool is developed to support a demand-responsive scheduling of training. Based on the difference between the actual and intended competence profile and under consideration of amending requirements, an optimally customized qualification concept is derived. For this, available training offers are categorized according to different dimensions: regarding contents of the course, but also intended target groups, focus of the imparted competences, implemented methods of learning and teaching, expected constraints for learning and necessary preknowledge. After completing a course, the achieved competences and the transferability of gathered knowledge are evaluated. Based on the results, recommendations for amending measures of learning are provided. Thus, a customized qualification for manufacturing metrology is facilitated, adapted to the specific needs and constraints of each individual learner.
Hemsley, Bronwyn; Rollo, Megan; Georgiou, Andrew; Balandin, Susan; Hill, Sophie
2018-01-01
To integrate the findings of research on electronic personal health records (e-PHRs) for an understanding of their health literacy demands on both patients and providers. We sought peer-reviewed primary research in English addressing the health literacy demands of e-PHRs that are online and allow patients any degree of control or input to the record. A synthesis of three theoretical models was used to frame the analysis of 24 studies. e-PHRs pose a wide range of health literacy demands on both patients and health service providers. Patient participation in e-PHRs relies not only on their level of education and computer literacy, and attitudes to sharing health information, but also upon their executive function, verbal expression, and understanding of spoken and written language. The multiple health literacy demands of e-PHRs must be considered when implementing population-wide initiatives for storing and sharing health information using these systems. The health literacy demands of e-PHRs are high and could potentially exclude many patients unless strategies are adopted to support their use of these systems. Developing strategies for all patients to meet or reduce the high health literacy demands of e-PHRs will be important in population-wide implementation. Copyright © 2017 Elsevier B.V. All rights reserved.
VieSLAF Framework: Enabling Adaptive and Versatile SLA-Management
NASA Astrophysics Data System (ADS)
Brandic, Ivona; Music, Dejan; Leitner, Philipp; Dustdar, Schahram
Novel computing paradigms like Grid and Cloud computing demand guarantees on non-functional requirements such as application execution time or price. Such requirements are usually negotiated following a specific Quality of Service (QoS) model and are expressed using Service Level Agreements (SLAs). Currently available QoS models assume either that service provider and consumer have matching SLA templates and common understanding of the negotiated terms or provide public templates, which can be downloaded and utilized by the end users. On the one hand, matching SLA templates represent an unrealistic assumption in systems where service consumer and provider meet dynamically and on demand. On the other hand, handling of public templates seems to be a rather challenging issue, especially if the templates do not reflect users’ needs. In this paper we present VieSLAF, a novel framework for the specification and management of SLA mappings. Using VieSLAF users may specify, manage, and apply SLA mapping bridging the gap between non-matching SLA templates. Moreover, based on the predefined learning functions and considering accumulated SLA mappings, domain specific public SLA templates can be derived reflecting users’ needs.
Dinh, Thanh; Kim, Younghan; Lee, Hyukjoon
2017-03-01
This paper presents a location-based interactive model of Internet of Things (IoT) and cloud integration (IoT-cloud) for mobile cloud computing applications, in comparison with the periodic sensing model. In the latter, sensing collections are performed without awareness of sensing demands. Sensors are required to report their sensing data periodically regardless of whether or not there are demands for their sensing services. This leads to unnecessary energy loss due to redundant transmission. In the proposed model, IoT-cloud provides sensing services on demand based on interest and location of mobile users. By taking advantages of the cloud as a coordinator, sensing scheduling of sensors is controlled by the cloud, which knows when and where mobile users request for sensing services. Therefore, when there is no demand, sensors are put into an inactive mode to save energy. Through extensive analysis and experimental results, we show that the location-based model achieves a significant improvement in terms of network lifetime compared to the periodic model.
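A minimal sketch of the coordination idea described above, with assumed names and a hypothetical sensor layout: the cloud activates only sensors whose locations match a current user demand and leaves the rest in an inactive, energy-saving mode.

```python
import math

sensors = {                                  # sensor_id -> (x, y), hypothetical layout
    "s1": (0.0, 0.0), "s2": (5.0, 5.0), "s3": (20.0, 20.0), "s4": (21.0, 19.0),
}

def schedule(demands, radius=3.0):
    """demands: list of (x, y, interest) tuples reported by mobile users."""
    active = set()
    for sid, (sx, sy) in sensors.items():
        for dx, dy, _interest in demands:
            if math.hypot(sx - dx, sy - dy) <= radius:
                active.add(sid)              # a nearby demand exists: wake this sensor
                break
    return {sid: ("ACTIVE" if sid in active else "SLEEP") for sid in sensors}

# only sensors near the requested location are woken up; the rest save energy
print(schedule([(20.5, 19.5, "air_quality")]))
```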
Dinh, Thanh; Kim, Younghan; Lee, Hyukjoon
2017-01-01
This paper presents a location-based interactive model of Internet of Things (IoT) and cloud integration (IoT-cloud) for mobile cloud computing applications, in comparison with the periodic sensing model. In the latter, sensing collections are performed without awareness of sensing demands. Sensors are required to report their sensing data periodically regardless of whether or not there are demands for their sensing services. This leads to unnecessary energy loss due to redundant transmission. In the proposed model, IoT-cloud provides sensing services on demand based on interest and location of mobile users. By taking advantages of the cloud as a coordinator, sensing scheduling of sensors is controlled by the cloud, which knows when and where mobile users request for sensing services. Therefore, when there is no demand, sensors are put into an inactive mode to save energy. Through extensive analysis and experimental results, we show that the location-based model achieves a significant improvement in terms of network lifetime compared to the periodic model. PMID:28257067
NASA Astrophysics Data System (ADS)
Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea
2017-11-01
Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.
Infinitely dilute partial molar properties of proteins from computer simulation.
Ploetz, Elizabeth A; Smith, Paul E
2014-11-13
A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method's feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.
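To make the contrast between simple ensemble averages and fluctuation properties concrete, here is a small sketch under assumed units and variable names (not the authors' code): the average volume converges with modest sampling, while a fluctuation formula such as the isothermal compressibility, kappa_T = <dV^2> / (k_B T <V>), typically needs far longer trajectories to converge.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K); use units consistent with V

def average_volume(volumes):
    """Simple ensemble average: converges with modest sampling."""
    return float(np.mean(volumes))

def isothermal_compressibility(volumes, temperature):
    """Fluctuation property kappa_T = <dV^2> / (kB T <V>): much more demanding
    to converge precisely, as the abstract notes for fluctuating quantities."""
    v = np.asarray(volumes, dtype=float)
    return float(v.var() / (KB * temperature * v.mean()))

# Usage with a hypothetical NPT trajectory (one volume per saved frame):
# kappa_T = isothermal_compressibility(volumes, temperature=298.15)
```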
1999-01-01
Currently, the U.S. Geological Survey (USGS) uses conventional lithographic printing techniques to produce paper copies of most of its mapping products. This practice is not economical for those products that are in low demand. With the advent of newer technologies, high-speed, large-format printers have been coupled with innovative computer software to turn digital map data into a printed map. It is now possible to store and retrieve data from vast geospatial data bases and print a map on an as-needed basis; that is, print on demand, thereby eliminating the need to warehouse an inventory of paper maps for which there is low demand. Using print-on-demand technology, the USGS is implementing map-on-demand (MOD) printing for certain infrequently requested maps. By providing MOD, the USGS can offer an alternative to traditional, large-volume printing and can improve its responsiveness to customers by giving them greater access to USGS scientific data in a format that otherwise might not be available.
Cloud computing approaches to accelerate drug discovery value chain.
Garg, Vibhav; Arora, Suchir; Gupta, Chitra
2011-12-01
Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is again posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good-to-have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best suited to managing drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
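For context, the quantity being accelerated is the joint-space inertia (mass) matrix M(q). The sketch below computes it for a hypothetical planar two-link arm with point masses at the link tips, using the textbook Jacobian form M(q) = sum_i m_i Jv_i^T Jv_i; it is not the composite-rigid-body recursion the brief describes, only an illustration of what that faster method produces.

```python
import numpy as np

def mass_matrix(q, m=(1.0, 1.0), l=(0.5, 0.5)):
    """Inertia matrix of a planar 2-link arm with point masses at the link tips.
    Geometry and masses are arbitrary; rotational link inertias are ignored."""
    q1, q2 = q
    m1, m2 = m
    l1, l2 = l
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    # Linear-velocity Jacobians of the two point masses.
    Jv1 = np.array([[-l1 * s1, 0.0],
                    [ l1 * c1, 0.0]])
    Jv2 = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                    [ l1 * c1 + l2 * c12,  l2 * c12]])
    return m1 * Jv1.T @ Jv1 + m2 * Jv2.T @ Jv2

print(mass_matrix([0.3, 0.8]))  # 2x2 symmetric, configuration-dependent matrix
```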
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, database and applications. Network usage and demands are growing at a very fast rate and to meet the current requirements, there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because of the distributed nature of their decision making process for switching or routing which are collocated on the same device. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate the challenges, network operations require efficient, flexible, agile and scalable software defined networks (SDN). This paper discuss various issues in SDN and suggests how to mitigate the network management related issues. A private cloud prototype test bed was setup to implement the SDN on the OpenStack platform to test and evaluate the various network performances provided by the various configurations.
Back to the Basics: Cooling with Ice.
ERIC Educational Resources Information Center
Estes, R. C.
1979-01-01
A new high school shifts an electrical demand charge load by using an icemaker during nonoperating hours to provide chilled water for producing cool air. A review resulted in a computer being placed in the design to control the electrical demand charge load in addition to spreading the load. (Author/MLF)
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
Sputnik: ad hoc distributed computation.
Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A
2015-04-15
In bioinformatic applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aims of this project were lightweight, ad hoc setup and fault-tolerant computation requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine which uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the current status of computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on Github http://github.com/sysbio-bioinf/sputnik under the Eclipse Public License. hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online.
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
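The abstract's reference implementation is in C with MPI and is not reproduced here; the following is a separate minimal sketch of the same worker-initiated, bag-of-tasks pattern using Python's mpi4py, with a placeholder run_model standing in for one modelling run or Monte Carlo trial.

```python
from mpi4py import MPI

TASK_TAG, STOP_TAG = 1, 2

def run_model(task_id):
    # Placeholder for one independent modelling run (hypothetical workload).
    return task_id ** 2

def main(n_tasks=100):
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:                 # coordinator holds the bag of tasks
        next_task, results, closed = 0, [], 0
        n_workers = comm.Get_size() - 1
        while closed < n_workers:
            status = MPI.Status()
            # Wait for any worker to report in (worker-initiated pull).
            result = comm.recv(source=MPI.ANY_SOURCE, status=status)
            if result is not None:
                results.append(result)
            if next_task < n_tasks:
                comm.send(next_task, dest=status.Get_source(), tag=TASK_TAG)
                next_task += 1
            else:
                comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
                closed += 1
        print(len(results), "results collected")
    else:                                    # workers pull tasks on demand
        comm.send(None, dest=0)              # initial request for work
        while True:
            status = MPI.Status()
            task = comm.recv(source=0, status=status)
            if status.Get_tag() == STOP_TAG:
                break
            comm.send(run_model(task), dest=0)

if __name__ == "__main__":
    main()
```

Because each worker pulls a new task only when it has finished the previous one, faster nodes naturally take on more tasks, which is what keeps makespans short on clusters that are heterogeneous both within and between nodes.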
Water demand forecasting: review of soft computing methods.
Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R
2017-07-01
Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
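As a toy illustration of the ANN family the review covers, the sketch below fits a small multilayer perceptron to lagged values of a synthetic daily-consumption series; the data, lag length and network size are arbitrary assumptions, not taken from any of the reviewed studies.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(3 * 365)
# Synthetic daily water demand with a seasonal cycle plus noise (illustrative only).
demand = 100 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)

lags = 7  # predict day t from the previous 7 days
X = np.array([demand[t - lags:t] for t in range(lags, len(demand))])
y = demand[lags:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```

Real studies would add weather, price and calendar covariates, which is where many of the hybrid methods mentioned above come in.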
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
State of the Art of Network Security Perspectives in Cloud Computing
NASA Astrophysics Data System (ADS)
Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang
Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. Arguably, customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are reflected in the realization of cloud computing. We live in a connected society with a flood of information; without computers connected to the Internet, our daily activities and work would be impossible. Cloud computing can provide customers with custom-tailored application software features and user environments based on the customer's needs by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, with users accessing their data and application software on the remote system where they are hosted. As the cloud computing system is connected to the Internet, addressing the network security issues of cloud computing is mandatory prior to real-world service. In this paper, a survey of and issues in network security for cloud computing are discussed from the perspective of real-world service environments.
Community-driven computational biology with Debian Linux.
Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles
2010-12-21
The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.
The Use of Board Games in Child Psychotherapy
ERIC Educational Resources Information Center
Oren, Ayala
2008-01-01
Playing checkers, football or more recently, computer games, is an important part of the latency child's culture. The ability to play games demands a level of emotional development similar to that needed to cope with the emotional/developmental demands characteristic of latency. A game shared by the therapist and child provides a picture of the…
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
NASA Technical Reports Server (NTRS)
Darzi, Michael; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)
1992-01-01
Methods for detecting and screening cloud contamination from satellite-derived visible and infrared data are reviewed in this document. The methods are applicable to past, present, and future polar orbiting satellite radiometers. Such instruments include the Coastal Zone Color Scanner (CZCS), operational from 1978 through 1986; the Advanced Very High Resolution Radiometer (AVHRR); the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), scheduled for launch in August 1993; and the Moderate Resolution Imaging Spectrometer (MODIS). Constant threshold methods are the least demanding computationally, and often provide adequate results. An improvement to these methods is to determine the thresholds dynamically by adjusting them according to the areal and temporal distributions of the surrounding pixels. Spatial coherence methods set thresholds based on the expected spatial variability of the data. Other statistically derived methods and various combinations of basic methods are also reviewed. The complexity of the methods is ultimately limited by the computing resources. Finally, some criteria for evaluating cloud screening methods are discussed.
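A minimal sketch of the two simplest families of methods mentioned above, constant thresholds and dynamically adjusted thresholds, applied to arrays of brightness temperature and visible reflectance. The threshold values and the use of scene-wide statistics are illustrative assumptions, not the operational settings of any of the sensors listed.

```python
import numpy as np

def constant_threshold_mask(brightness_temp_k, reflectance,
                            bt_max=285.0, refl_min=0.3):
    """Flag pixels that are both cold and bright as cloud (fixed thresholds)."""
    return (brightness_temp_k < bt_max) & (reflectance > refl_min)

def dynamic_threshold_mask(brightness_temp_k, reflectance, k=2.0):
    """Adjust the thresholds from the distribution of the surrounding pixels
    (here, the whole scene for simplicity): flag pixels that are much colder
    or brighter than is typical for the scene."""
    bt_max = brightness_temp_k.mean() - k * brightness_temp_k.std()
    refl_min = reflectance.mean() + k * reflectance.std()
    return (brightness_temp_k < bt_max) | (reflectance > refl_min)

# Usage with synthetic data standing in for a satellite scene:
rng = np.random.default_rng(0)
bt = rng.normal(290.0, 5.0, (512, 512))
refl = rng.normal(0.1, 0.05, (512, 512))
mask = constant_threshold_mask(bt, refl)
```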
Integrating Embedded Computing Systems into High School and Early Undergraduate Education
ERIC Educational Resources Information Center
Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.
2011-01-01
Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…
Cloud computing applications for biomedical science: A perspective.
Navale, Vivek; Bourne, Philip E
2018-06-01
Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.
Cloud computing applications for biomedical science: A perspective
2018-01-01
Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176
ERIC Educational Resources Information Center
Black, Claudia
Libraries are becoming information access points, not just book repositories. With greater distribution of printed materials, increased use of optical disks and other compact storage techniques, the emergence of publication on demand, and the proliferation of electronic databases, libraries without large collections will be able to provide prompt…
Demand driven decision support for efficient water resources allocation in irrigated agriculture
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grießbach, Ulrike; Röhm, Patric; Stange, Peter; Wagner, Michael; Seidel, Sabine; Werisch, Stefan; Barfus, Klemens
2014-05-01
Due to climate change, extreme weather conditions, such as longer dry spells in the summer months, may have an increasing impact on agriculture in Saxony (Eastern Germany). For this reason, and because of declining amounts of rainfall during the growing season, the use of irrigation will become more important in Eastern Germany in the future. To cope with this higher demand for water, a new decision support framework is developed which focuses on an integrated management of both irrigation water supply and demand. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from the optimized agronomic response at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPFs), which provide the estimated yield subject to the minimum amount of irrigation water. These functions take into account the different soil types, crops and stochastically generated climate scenarios. By applying mathematical interpolation and optimization techniques, the SCWPFs are used to compute the water demand considering different constraints, for instance variable and fixed costs or the producer price. This generic approach enables the computation both for multiple crops at farm scale and for the aggregated response to water pricing at a regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance Irrigation) project a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies for an effective and efficient utilization of water in order to meet future demands. The prototype is implemented as a web-based decision support system and it is based on a service-oriented geo-database architecture.
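A toy version of the demand-side calculation, under an assumed saturating yield response and invented prices: a per-scenario crop water production function is averaged over stochastic climate scenarios, and the irrigation depth that maximizes expected profit is selected. This mirrors the role of SCWPFs in the framework but is not the SAPHIR implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical climate scenarios: (maximum yield t/ha, half-saturation depth mm).
scenarios = np.column_stack([rng.normal(8.0, 1.0, 200),
                             rng.normal(150.0, 30.0, 200)])

def expected_yield(water_mm, scenarios):
    """Stochastic crop water production function, approximated by averaging a
    saturating yield response over the generated climate scenarios."""
    y_max, w_half = scenarios[:, 0], scenarios[:, 1]
    return float(np.mean(y_max * water_mm / (water_mm + w_half)))

def optimal_irrigation(price, water_cost, fixed_cost, scenarios):
    """Pick the irrigation depth (mm) that maximizes expected profit per hectare."""
    depths = np.linspace(0.0, 400.0, 401)
    profit = [price * expected_yield(w, scenarios) - water_cost * w - fixed_cost
              for w in depths]
    best = int(np.argmax(profit))
    return depths[best], profit[best]

depth, profit = optimal_irrigation(price=180.0, water_cost=0.4,
                                   fixed_cost=200.0, scenarios=scenarios)
print(f"optimal depth: {depth:.0f} mm, expected profit: {profit:.0f} per ha")
```

Aggregating such farm-scale responses over soils, crops and prices is, in essence, how a regional demand curve for water pricing can be built.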
The Need for Optical Means as an Alternative for Electronic Computing
NASA Technical Reports Server (NTRS)
Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Demand for faster computers is growing rapidly to keep pace with the fast growth of the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide the way out of the extreme limitations imposed by conventional electronics on the growth of speed and complexity of today's computations. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building the future optical computer.
DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.
Kim, Lok-Won
2018-05-01
Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but its heavy computation demand has considerably limited practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of a class of artificial neural networks (ANNs), restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
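For readers unfamiliar with the workload being accelerated, the sketch below shows one contrastive-divergence (CD-1) update of a binary RBM in NumPy: the kind of dense matrix products and element-wise sigmoids that such an accelerator targets. Sizes and the learning rate are arbitrary assumptions; this is not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, lr=0.01, rng=None):
    """One CD-1 step for a binary RBM on a batch v0 of shape (batch, n_visible)."""
    rng = rng or np.random.default_rng(0)
    h_prob0 = sigmoid(v0 @ W + b_h)                   # positive phase
    h0 = (rng.random(h_prob0.shape) < h_prob0) * 1.0  # sample hidden units
    v_prob1 = sigmoid(h0 @ W.T + b_v)                 # reconstruction
    h_prob1 = sigmoid(v_prob1 @ W + b_h)              # negative phase
    n = v0.shape[0]
    W += lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / n
    b_v += lr * (v0 - v_prob1).mean(axis=0)
    b_h += lr * (h_prob0 - h_prob1).mean(axis=0)
    return W, b_v, b_h

# Usage with arbitrary sizes (128 input cases per batch, as in the abstract):
rng = np.random.default_rng(1)
W = rng.normal(0, 0.01, (784, 256))
b_v, b_h = np.zeros(784), np.zeros(256)
v0 = (rng.random((128, 784)) < 0.5) * 1.0
W, b_v, b_h = cd1_update(v0, W, b_v, b_h, rng=rng)
```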
Infinitely Dilute Partial Molar Properties of Proteins from Computer Simulation
2015-01-01
A detailed understanding of temperature and pressure effects on an infinitely dilute protein’s conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method’s feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages. PMID:25325571
ERIC Educational Resources Information Center
Zhang, Chi; Reichgelt, Han; Rutherfoord, Rebecca H.; Wang, Andy Ju An
2014-01-01
Health Information Technology (HIT) professionals are in increasing demand as healthcare providers need help in the adoption and meaningful use of Electronic Health Record (EHR) systems while the HIT industry needs workforce skilled in HIT and EHR development. To respond to this increasing demand, the School of Computing and Software Engineering…
Enabling BOINC in infrastructure as a service cloud system
NASA Astrophysics Data System (ADS)
Montes, Diego; Añel, Juan A.; Pena, Tomás F.; Uhe, Peter; Wallom, David C. H.
2017-02-01
Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of these have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of different services to manage all computation aspects of a project. The BOINC system is ideal in those cases where not only does the research community involved need low-cost access to massive computing resources but also where there is a significant public interest in the research being done. We discuss the way in which cloud services can help BOINC-based projects to deliver results in a fast, on demand manner. This is difficult to achieve using volunteers, and at the same time, using scalable cloud resources for short on demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study.
ERIC Educational Resources Information Center
Cano, Diana Wright
2017-01-01
State Education Agencies (SEAs) face challenges to the implementation of computer-based accountability assessments. The change in the accountability assessments from paper-based to computer-based demands action from the states to enable schools and districts to build their technical capacity, train the staff, provide practice opportunities to the…
On-demand Simulation of Atmospheric Transport Processes on the AlpEnDAC Cloud
NASA Astrophysics Data System (ADS)
Hachinger, S.; Harsch, C.; Meyer-Arnek, J.; Frank, A.; Heller, H.; Giemsa, E.
2016-12-01
The "Alpine Environmental Data Analysis Centre" (AlpEnDAC) develops a data-analysis platform for high-altitude research facilities within the "Virtual Alpine Observatory" project (VAO). This platform, with its web portal, will support use cases going well beyond data management: on user request, the data are augmented with "on-demand" simulation results, such as air-parcel trajectories for tracing the source of pollutants when they appear in high concentration. The respective back-end mechanism uses the Compute Cloud of the Leibniz Supercomputing Centre (LRZ) to transparently calculate results requested by the user, insofar as they have not yet been stored in AlpEnDAC. The queuing-system operation model common in supercomputing is replaced by a model in which Virtual Machines (VMs) on the cloud are automatically created/destroyed, providing the necessary computing power immediately on demand. From a security point of view, this allows simulations to be performed in a sandbox defined by the VM configuration, without direct access to a computing cluster. Within a few minutes, the user receives conveniently visualized results. The AlpEnDAC infrastructure is distributed among two participating institutes [front-end at the German Aerospace Centre (DLR), simulation back-end at LRZ], requiring an efficient mechanism for synchronization of measured and augmented data. We discuss our iRODS-based solution for these data-management tasks as well as the general AlpEnDAC framework. Our cloud-based offerings aim to make scientific computing for our users much more convenient and flexible than it has been, and to allow scientists without a broad background in scientific computing to benefit from complex numerical simulations.
Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen
2017-11-01
Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.
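A minimal sketch of the kind of gender-stratified logistic regression with a two-way interaction described above, using statsmodels; the file name and variable names are hypothetical placeholders, not the Québec survey's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file with one row per worker.
df = pd.read_csv("survey.csv")  # assumed columns: uemsp (0/1), gender,
                                # computer_hours, emotional_demand, physical_demand

# Stratify by gender and include a computer-work x emotional-demand interaction,
# mirroring the moderation analysis described in the abstract.
for gender, sub in df.groupby("gender"):
    fit = smf.logit(
        "uemsp ~ computer_hours * emotional_demand + physical_demand",
        data=sub,
    ).fit(disp=False)
    print(gender)
    print(np.exp(fit.params))  # odds ratios for each term
```

A positive, significant interaction coefficient in the women's stratum would correspond to the reported finding that emotionally demanding work amplifies the effect of computer work on UEMSP.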
Pc-Based Floating Point Imaging Workstation
NASA Astrophysics Data System (ADS)
Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin
1989-07-01
The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
Cyber-Workstation for Computational Neuroscience
DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.
2009-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
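The sketch below illustrates the general idea of parallelizing the assignment step of k-means across cores and merging per-chunk partial sums. It uses Python's multiprocessing rather than the Java transactional-memory design of the paper, so it should be read as an analogy, not the published algorithm.

```python
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    """Assign each point in a chunk to its nearest centroid and return the
    per-cluster partial sums and counts needed for the update step."""
    chunk, centroids = args
    d2 = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        members = chunk[labels == j]
        sums[j], counts[j] = members.sum(axis=0), len(members)
    return sums, counts

def parallel_kmeans(data, k=3, n_iter=20, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
    chunks = np.array_split(data, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(n_iter):
            parts = pool.map(assign_chunk, [(c, centroids) for c in chunks])
            sums = sum(p[0] for p in parts)
            counts = sum(p[1] for p in parts)
            keep = counts > 0              # leave empty clusters unchanged
            centroids[keep] = sums[keep] / counts[keep, None]
    return centroids

if __name__ == "__main__":                 # required for process-based pools
    data = np.random.default_rng(1).normal(size=(10000, 5))
    print(parallel_kmeans(data))
```

Repeating such runs with slightly perturbed parameters, as the abstract describes for stability and sensitivity analysis, is exactly the kind of embarrassingly parallel workload that benefits from using all cores of a laboratory machine.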
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and specialpurpose software that collects and stores electrical power data by various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
Dilsizian, Steven E; Siegel, Eliot L
2014-01-01
Although advances in information technology in the past decade have come in quantum leaps in nearly every aspect of our lives, they seem to be coming at a slower pace in the field of medicine. However, the implementation of electronic health records (EHR) in hospitals is increasing rapidly, accelerated by the meaningful use initiatives associated with the Center for Medicare & Medicaid Services EHR Incentive Programs. The transition to electronic medical records and availability of patient data has been associated with increases in the volume and complexity of patient information, as well as an increase in medical alerts, with resulting "alert fatigue" and increased expectations for rapid and accurate diagnosis and treatment. Unfortunately, these increased demands on health care providers create greater risk for diagnostic and therapeutic errors. In the near future, artificial intelligence (AI)/machine learning will likely assist physicians with differential diagnosis of disease, treatment options suggestions, and recommendations, and, in the case of medical imaging, with cues in image interpretation. Mining and advanced analysis of "big data" in health care provide the potential not only to perform "in silico" research but also to provide "real time" diagnostic and (potentially) therapeutic recommendations based on empirical data. "On demand" access to high-performance computing and large health care databases will support and sustain our ability to achieve personalized medicine. The IBM Jeopardy! Challenge, which pitted the best all-time human players against the Watson computer, captured the imagination of millions of people across the world and demonstrated the potential to apply AI approaches to a wide variety of subject matter, including medicine. The combination of AI, big data, and massively parallel computing offers the potential to create a revolutionary way of practicing evidence-based, personalized medicine.
Houston Area Survey of Employment Trends for College Graduates.
ERIC Educational Resources Information Center
Somers, Coralie; Small, David
The actual and projected level of demand in the employment of college graduates in the Houston, Texas, area was surveyed. Responses from 74 employers provided information on methods for recruiting college graduates and hiring levels for 13 occupational groups, including advertising, architecture, banking, computer software, construction,…
Using Online Delivery for Workplace Training in Healthcare
ERIC Educational Resources Information Center
Bryce, Elizabeth; Choi, Peter; Landstrom, Margaret; LoChang, Justin
2008-01-01
The potential impact of on-line learning in health care is significant. By providing access to educational material from an internet-connected computer anytime and anywhere, healthcare workers (HCWs), whose workload demands are often changing and somewhat unpredictable, have increased ability to self-educate. For example, the growing recognition…
[A study on dental manpower distribution in Shanghai Pudong new district].
Gu, Qin; Feng, Xi-ping
2006-02-01
A study of dental manpower distribution was carried out in the Shanghai Pudong new district in order to analyze the needs and demands for dental services in the district, to forecast developmental trends of dental demand, and to provide a basis for regional dental manpower programs in the urban areas of China. An analysis was made of 601 subjects from all age groups in the Shanghai Pudong new district, selected by stratified and cluster random sampling, and of 83 medical institutions of stomatology in the district, by mass examination. The amount of dental manpower needed and demanded was computed and forecast by means of health care need-and-demand and proportional analogy methods. The total amount needed was 755-834 dentists. The total amount demanded was 285-314 dentists. It was forecast that the figure would be 392-1041 in 2010. The prevalence of oral disease was 90.18%, but only 37.66% of subjects had visited a dentist within a year. The ratio of dentists to the population was 1:9375. The imbalance between the demand for and supply of dental manpower was mainly due to low public awareness, irrational demand levels, problems on the service provider side, and irrational dental manpower levels.
Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-01-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.
Cloud computing for energy management in smart grid - an application survey
NASA Astrophysics Data System (ADS)
Naveen, P.; Kiing Ing, Wong; Kobina Danquah, Michael; Sidhu, Amandeep S.; Abu-Siada, Ahmed
2016-03-01
The smart grid is an emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To address these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
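To make "economic power dispatch" concrete, here is a standalone toy dispatch under assumed quadratic generator cost curves: minimize total generation cost subject to meeting a given demand, with generator limits. The numbers are invented, and the cloud aspect (where such an optimization would run and how demand data would be collected) is not modeled.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic cost curves C_i(P) = a_i P^2 + b_i P + c_i (cost/h).
a = np.array([0.004, 0.006, 0.009])
b = np.array([5.3, 5.5, 5.8])
c = np.array([500.0, 400.0, 200.0])
p_min = np.array([100.0, 100.0, 50.0])   # MW limits per generator
p_max = np.array([450.0, 350.0, 225.0])
demand = 800.0                            # MW to be met (e.g. reported by the grid)

def total_cost(P):
    return float(np.sum(a * P**2 + b * P + c))

res = minimize(total_cost,
               x0=(p_min + p_max) / 2,
               bounds=list(zip(p_min, p_max)),
               constraints=[{"type": "eq", "fun": lambda P: P.sum() - demand}])
print("dispatch (MW):", np.round(res.x, 1), "cost:", round(total_cost(res.x), 1))
```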
Community-driven computational biology with Debian Linux
2010-01-01
Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984
AceCloud: Molecular Dynamics Simulations in the Cloud.
Harvey, M J; De Fabritiis, G
2015-05-26
We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.
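In the spirit of such a valuation framework (but not reproducing it), a back-of-the-envelope comparison between amortized on-premise costs and pay-per-use cloud costs might look like the sketch below; every figure is hypothetical.

```python
# Back-of-the-envelope cost comparison (hypothetical figures, not the paper's framework).
def on_premise_annual_cost(hardware_capex, years_of_life, annual_opex):
    """Amortized yearly cost of owning infrastructure."""
    return hardware_capex / years_of_life + annual_opex

def cloud_annual_cost(hourly_rate, hours_used_per_year, storage_gb, gb_month_rate):
    """Pay-per-use yearly cost for compute plus storage."""
    return hourly_rate * hours_used_per_year + storage_gb * gb_month_rate * 12

on_prem = on_premise_annual_cost(hardware_capex=120_000, years_of_life=4, annual_opex=30_000)
cloud = cloud_annual_cost(hourly_rate=2.5, hours_used_per_year=6_000,
                          storage_gb=5_000, gb_month_rate=0.02)
print(f"on-premise: ${on_prem:,.0f}/yr  cloud: ${cloud:,.0f}/yr")
```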
Streaming support for data intensive cloud-based sequence analysis.
Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed
2013-01-01
Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
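The key idea is overlapping data transfer with computation so that independent reads are analyzed as they arrive rather than after the full upload. The sketch below is a generic illustration of that overlap using a producer/consumer queue; it is not the elastream implementation, and the chunking and analysis step are stand-ins.

```python
# Generic sketch of overlapping data transfer with computation (not elastream itself).
# Read batches are processed as they arrive instead of waiting for the full upload.
import queue, threading

def transfer(chunks, buf):
    """Simulates chunks of NGS reads arriving over the network."""
    for chunk in chunks:
        buf.put(chunk)
    buf.put(None)  # sentinel: transfer finished

def process(buf, results):
    """Consumes chunks as soon as they are available (reads are independent)."""
    while (chunk := buf.get()) is not None:
        results.append(sum(len(read) for read in chunk))  # stand-in for real analysis

chunks = [["ACGT" * 25] * 1000 for _ in range(10)]  # toy read batches
buf, results = queue.Queue(maxsize=4), []
t = threading.Thread(target=transfer, args=(chunks, buf))
t.start()
process(buf, results)
t.join()
print("processed batches:", len(results))
```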
Computer generated hologram from point cloud using graphics processor.
Chen, Rick H-Y; Wilkinson, Timothy D
2009-12-20
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
Introduction to the Space Physics Analysis Network (SPAN)
NASA Technical Reports Server (NTRS)
Green, J. L. (Editor); Peters, D. J. (Editor)
1985-01-01
The Space Physics Analysis Network or SPAN is emerging as a viable method for solving an immediate communication problem for the space scientist. SPAN provides low-rate communication capability with co-investigators and colleagues, and access to space science data bases and computational facilities. SPAN utilizes up-to-date hardware and software for computer-to-computer communications, allowing binary file transfer and remote log-on capability to over 25 nationwide space science computer systems. SPAN is not discipline or mission dependent, with participation from scientists in such fields as magnetospheric, ionospheric, planetary, and solar physics. Basic information on the network and its use is provided. It is anticipated that SPAN will grow rapidly over the next few years, not only from the standpoint of more network nodes, but also because, as scientists become more proficient in the use of telescience, more capability will be needed to satisfy their demands.
Economic models for management of resources in peer-to-peer and grid computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David
2001-07-01
The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tender, and auction models. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
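As a toy illustration of deadline- and budget-constrained resource selection under posted prices (in the spirit of the economic models discussed, but not the actual Nimrod/G scheduler), the sketch below greedily assigns jobs to hypothetical resources until the budget or deadline capacity is exhausted.

```python
# Toy deadline/budget-constrained broker over posted-price resources
# (illustrative of the economic models discussed; not the Nimrod/G algorithm).
resources = [  # hypothetical posted prices and speeds
    {"name": "siteA", "price_per_job": 0.10, "jobs_per_hour": 120},
    {"name": "siteB", "price_per_job": 0.04, "jobs_per_hour": 40},
    {"name": "siteC", "price_per_job": 0.07, "jobs_per_hour": 80},
]

def select_resources(n_jobs, deadline_hours, budget, optimize="cost"):
    """Greedy selection: cheapest-first for cost optimization, fastest-first for time."""
    key = "price_per_job" if optimize == "cost" else "jobs_per_hour"
    order = sorted(resources, key=lambda r: r[key], reverse=(optimize == "time"))
    plan, remaining, spent = [], n_jobs, 0.0
    for r in order:
        capacity = r["jobs_per_hour"] * deadline_hours   # jobs it can finish by the deadline
        take = min(remaining, capacity)
        if take <= 0:
            continue
        cost = take * r["price_per_job"]
        if spent + cost > budget:                        # trim to what the budget allows
            take = int((budget - spent) / r["price_per_job"])
            cost = take * r["price_per_job"]
        plan.append((r["name"], take))
        spent += cost
        remaining -= take
    return plan, remaining, spent

print(select_resources(n_jobs=5000, deadline_hours=24, budget=300.0, optimize="cost"))
```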
A Hybrid Cloud Computing Service for Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, C. P.
2016-12-01
Cloud computing is becoming a norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF spatiotemporal innovation center to meet these demands. This paper reports on several aspects of the service: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for sharing resources; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these and the public AWS cloud for interoperability and for sharing computing resources when demand surges; 4) the cloud service is used to support the NSF EarthCube program through the ECITE project, ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities for conducting geoscience research. A brief introduction on how to use the cloud service is included.
Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1999-01-01
Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.
Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap
2004-02-05
The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.
ERIC Educational Resources Information Center
Girill, T. R.
1991-01-01
This article continues the description of DFT (Document, Find, Theseus), an online documentation system that provides computer-managed on-demand printing of software manuals as well as the interactive retrieval of reference passages. Document boundaries in the hypertext database are discussed, search vocabulary complexities are described, and text…
Enabling On-Demand Database Computing with MIT SuperCloud Database Management System
2015-09-15
arc.liv.ac.uk/trac/SGE) provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming ... a MySQL database to store DNS records. The DNS records are controlled via a simple web service interface that allows records to be created
A Comparison of Traditional Homework to Computer-Supported Homework
ERIC Educational Resources Information Center
Mendicino, Michael; Razzaq, Leena; Heffernan, Neil T.
2009-01-01
This study compared learning for fifth grade students in two math homework conditions. The paper-and-pencil condition represented traditional homework, with review of problems in class the following day. The Web-based homework condition provided immediate feedback in the form of hints on demand and step-by-step scaffolding. We analyzed the results…
Zhang, Lei; Zhang, Jing
2017-08-07
A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Therefore, improving individual power requirement and distribution efficiency to ensure communication reliability while preserving user privacy is a new challenge for SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying the individual requirement in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes.
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
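The core mechanism described is detecting identical pages offline and backing them with a single physical copy. The sketch below is a toy illustration of that idea using content hashing on fixed-size pages; it is not the SMD implementation, and the page contents are fabricated.

```python
# Toy illustration of page-level deduplication by content hashing (not the SMD code).
# Identical pages are detected offline and mapped to a single stored copy.
import hashlib

PAGE_SIZE = 4096

def deduplicate(memory: bytes):
    """Return a page table of hashes plus the set of unique pages actually stored."""
    store, page_table = {}, []
    for off in range(0, len(memory), PAGE_SIZE):
        page = memory[off:off + PAGE_SIZE]
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)     # keep one physical copy per distinct page
        page_table.append(digest)
    return page_table, store

# Two VMs running the same code segment share most of their pages.
code_segment = b"\x90" * PAGE_SIZE * 8
vm_memory = (code_segment + b"data-vm1".ljust(PAGE_SIZE, b"\x00")
             + code_segment + b"data-vm2".ljust(PAGE_SIZE, b"\x00"))
table, store = deduplicate(vm_memory)
print(f"{len(table)} logical pages backed by {len(store)} physical pages")
```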
U.S. Geological Survey Groundwater Modeling Software: Making Sense of a Complex Natural Resource
Provost, Alden M.; Reilly, Thomas E.; Harbaugh, Arlen W.; Pollock, David W.
2009-01-01
Computer models of groundwater systems simulate the flow of groundwater, including water levels, and the transport of chemical constituents and thermal energy. Groundwater models afford hydrologists a framework on which to organize their knowledge and understanding of groundwater systems, and they provide insights water-resources managers need to plan effectively for future water demands. Building on decades of experience, the U.S. Geological Survey (USGS) continues to lead in the development and application of computer software that allows groundwater models to address scientific and management questions of increasing complexity.
A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks
NASA Technical Reports Server (NTRS)
Cui, Zhenqian
1999-01-01
With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay-sensitive. The usefulness of data depends not only on the correctness of the received data, but also on the time at which the data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
Methodological approaches of health technology assessment.
Goodman, C S; Ahn, R
1999-12-01
In this era of evolving health care systems throughout the world, technology remains the substance of health care. Medical informatics comprises a growing contribution to the technologies used in the delivery and management of health care. Diverse, evolving technologies include artificial neural networks, computer-assisted surgery, computer-based patient records, hospital information systems, and more. Decision-makers increasingly demand well-founded information to determine whether or how to develop these technologies, allow them on the market, acquire them, use them, pay for their use, and more. The development and wider use of health technology assessment (HTA) reflects this demand. While HTA offers systematic, well-founded approaches for determining the value of medical informatics technologies, HTA must continue to adapt and refine its methods in response to these evolving technologies. This paper provides a basic overview of HTA principles and methods.
NASA Astrophysics Data System (ADS)
Heller, Johann; Flisgen, Thomas; van Rienen, Ursula
The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The algorithm develops a single health score for office computers, today just Windows, but we plan to extend this to Apple computers. The score is derived from various parameters, including: CPU utilization; memory utilization; various error logs; disk problems; and disk write queue length. It then uses a weighting scheme to balance these parameters and provide an overall health score. By using these parameters, we are not just assessing the theoretical performance of the components of the computer; rather, we are using actual performance metrics that are selected to be a more realistic representation of the experience of the person using the computer. This includes compensating for the nature of their use. If there are two identical computers and the user of one places heavy demands on their computer compared with the user of the second computer, the former will have a lower health score. This allows us to provide a 'fit for purpose' score tailored to the assigned user. This is very helpful data to inform the managers when individual computers need to be replaced. Additionally, it provides specific information that can facilitate fixing the computer, to extend its useful lifetime. This presents direct financial savings, time savings for users transferring from one computer to the next, and better environmental stewardship.
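The record names the input parameters and a weighting scheme but not the actual weights, scaling, or formula. The sketch below shows one hypothetical way such a weighted score could be computed; the weight values and the [0, 1] normalization are assumptions, not the authors' algorithm.

```python
# Hypothetical weighted health score along the lines described above;
# the actual parameters, scaling, and weights used by the authors are not given.
WEIGHTS = {            # must sum to 1.0; values here are illustrative
    "cpu_utilization": 0.25,
    "memory_utilization": 0.25,
    "error_log_rate": 0.20,
    "disk_problems": 0.15,
    "disk_write_queue": 0.15,
}

def health_score(metrics: dict) -> float:
    """Each metric is pre-scaled to [0, 1], where 1 means 'worst observed'.
    The score is 100 for a perfectly healthy machine and drops as problems grow."""
    penalty = sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
                  for name, value in metrics.items())
    return round(100.0 * (1.0 - penalty), 1)

print(health_score({
    "cpu_utilization": 0.9,    # sustained high CPU load for this user
    "memory_utilization": 0.7,
    "error_log_rate": 0.1,
    "disk_problems": 0.0,
    "disk_write_queue": 0.3,
}))
```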
A survey of GPU-based medical image computing techniques
Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming
2012-01-01
Medical imaging currently plays a crucial role throughout the entire range of clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Background Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
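Loop perforation, one classic family of AC techniques that surveys like this cover, skips a fraction of loop iterations to trade output quality for reduced work. The sketch below is a generic illustration of that idea, not code drawn from the survey or any surveyed paper.

```python
# Generic loop-perforation example: process only a fraction of the data to trade
# output quality for reduced work (illustrative of one approximate-computing technique).
def mean_exact(values):
    return sum(values) / len(values)

def mean_perforated(values, skip_factor=4):
    """Process only every skip_factor-th element, cutting work by roughly skip_factor."""
    sampled = values[::skip_factor]
    return sum(sampled) / len(sampled)

data = [((i * 37) % 1000) / 1000 for i in range(1_000_000)]
exact = mean_exact(data)
approx = mean_perforated(data, skip_factor=10)   # ~10x less work
print(f"exact={exact:.4f} approx={approx:.4f} error={abs(exact - approx):.4f}")
```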
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
NASA Astrophysics Data System (ADS)
Evans, J. D.; Tislin, D.
2017-12-01
Observations from the Joint Polar Satellite System (JPSS) support National Weather Service (NWS) forecasters, whose Advanced Weather Interactive Processing System (AWIPS) Data Delivery (DD) will access JPSS data products on demand from the National Environmental Satellite, Data, and Information Service (NESDIS) Product Distribution and Access (PDA) service. Based on the Open Geospatial Consortium (OGC) Web Coverage Service, this on-demand service promises broad interoperability and frugal use of data networks by serving only the data that a user needs. But the volume, velocity, and variety of JPSS data products impose several challenges to such a service. It must be efficient to handle large volumes of complex, frequently updated data, and to fulfill many concurrent requests. It must offer flexible data handling and delivery, to work with a diverse and changing collection of data, and to tailor its outputs into products that users need, with minimal coordination between provider and user communities. It must support 24x7 operation, with no pauses in incoming data or user demand; and it must scale to rapid changes in data volume, variety, and demand as new satellites launch, more products come online, and users rely increasingly on the service. We are addressing these challenges in order to build an efficient and effective on-demand JPSS data service. For example, on-demand subsetting by many users at once may overload a server's processing capacity or its disk bandwidth - unless alleviated by spatial indexing, geolocation transforms, or pre-tiling and caching. Filtering by variable (/ band / layer) may also alleviate network loads, and provide fine-grained variable selection; to that end we are investigating how best to provide random access into the variety of spatiotemporal JPSS data products. Finally, producing tailored products (derivatives, aggregations) can boost flexibility for end users; but some tailoring operations may impose significant server loads. Operating this service in a cloud computing environment allows cost-effective scaling during the development and early deployment phases - and perhaps beyond. We will discuss how NESDIS and NWS are assessing and addressing these challenges to provide timely and effective access to JPSS data products for weather forecasters throughout the country.
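As a rough illustration of on-demand subsetting via the OGC Web Coverage Service interface mentioned above, the sketch below builds a WCS 2.0-style GetCoverage request restricted to a small spatial window. The endpoint URL and coverage identifier are hypothetical placeholders (not NESDIS PDA values), and exact parameter names depend on the deployed service.

```python
# Illustrative WCS 2.0-style GetCoverage request with spatial subsetting.
# The endpoint URL and coverageId are hypothetical placeholders.
import urllib.parse

base_url = "https://example.gov/wcs"          # hypothetical service endpoint
params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "VIIRS_SST_EXAMPLE"),      # hypothetical coverage identifier
    ("subset", "Lat(35.0,45.0)"),             # only the window the forecaster needs
    ("subset", "Long(-105.0,-90.0)"),
    ("format", "application/netcdf"),
]
print(base_url + "?" + urllib.parse.urlencode(params))
# A client would then issue an HTTP GET on this URL and receive just the subset,
# which is what keeps network loads low compared with bulk delivery.
```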
Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.
Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C
2016-03-01
Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.
Demanded competences in the agricultural engineering sector in Spain
NASA Astrophysics Data System (ADS)
Perdigones, A.; García, J. L.; Benavente, R. M.; Tarquis, A. M.
2009-04-01
An engineering education should prepare students, i.e., emerging engineers, to use problem-solving processes that combine creativity and imagination with rigour and discipline. The emphasis in training engineers may be best placed on answering the needs of industry; indeed, many proposals are now being made to try to reduce the gap between the educational and industrial communities. Training in the use of certain skills or competences may be one way of better preparing engineering undergraduates for eventual employment in industry. However, industry's needs in this respect must first be known. The aim of this work was to determine which skills are used by practising agricultural engineers, with the aim of incorporating training in their use into our department's teaching curriculum. Three surveys were undertaken to determine which skills are demanded of agricultural engineers in their professional activities in Spain. The surveys were carried out by the Department of Rural Engineering, Technical University of Madrid (Spain), and covered two related degrees (agricultural engineering degrees with study plans of three and five years, respectively) during the 2006/07 and 2007/08 academic years. The first survey determined the competences acquired by students during their academic studies (371 students interviewed). The second survey determined the skills demanded by enterprises of the agricultural sector (50 enterprises interviewed). The third survey determined the skills demanded of agricultural engineers working in the sector (70 engineers interviewed), specifically asking about the computer programs used by practising agricultural engineers. The surveys showed important differences between the competences demanded by the enterprises and the competences acquired by the students at the university. Enterprises mainly demanded general competences (team working, time management, and skills with computer programs) and were less interested in specific technical skills (engineering, economic, and biological competences). These differences suggest it might be a good idea to increase the amount of time devoted to the skills demanded by the enterprises. The software packages most commonly used by practising engineers were Microsoft Office / Excel (used by 79% of respondents) and CAD (56%), as well as budgeting (27%), statistical (21%), engineering (15%) and GIS (13%) programs. As a result of this survey, our university department opened an additional computer suite in order to provide students with practical experience in the use of the demanded competences. The results of this survey underline the importance of competence training in this and perhaps other fields of engineering.
NASA Astrophysics Data System (ADS)
Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats
2014-06-01
Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
Programming distributed medical applications with XWCH2.
Ben Belgacem, Mohamed; Niinimaki, Marko; Abdennadher, Nabil
2010-01-01
Many medical applications utilise distributed/parallel computing in order to cope with the demands of large data volumes or computing power requirements. In this paper, we present a new version of the XtremWeb-CH (XWCH) platform and demonstrate two medical applications that run on XWCH. The platform is versatile in that it supports direct communication between tasks. When tasks cannot communicate directly, warehouses are used as intermediary nodes between "producer" and "consumer" tasks. New features have been developed to provide improved support for writing powerful distributed applications using an easy API.
A Unified Framework for Periodic, On-Demand, and User-Specified Software Information
NASA Technical Reports Server (NTRS)
Kolano, Paul Z.
2004-01-01
Although grid computing can increase the number of resources available to a user; not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
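The abstract says clients retrieve software information with standard XPath queries over a unified namespace but does not give the schema. The sketch below runs an XPath-style query against a made-up XML document to show the flavor of such a lookup; the element and attribute names, and the host names, are hypothetical, not the IPG service's actual schema.

```python
# Illustrative XPath-style lookup over a made-up software-information document;
# the element/attribute names are hypothetical, not the IPG service's schema.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<software>
  <host name="nodeA.example.nasa.gov">
    <package name="gcc" version="3.2.1"/>
    <package name="mpich" version="1.2.5"/>
  </host>
  <host name="nodeB.example.nasa.gov">
    <package name="gcc" version="2.95"/>
  </host>
</software>
""")

# Which hosts have gcc installed, and at what version?
for host in doc.findall(".//host"):
    for pkg in host.findall("./package[@name='gcc']"):
        print(host.get("name"), "gcc", pkg.get("version"))
```

A broker could use the result of such a query to select only hosts whose installed compiler version is compatible with a given application.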
ERIC Educational Resources Information Center
Technology & Learning, 2008
2008-01-01
When it comes to IT, there has always been an important link between data center control and client flexibility. As computing power increases, so do the potentially crippling threats to security, productivity and financial stability. This article talks about Dell's On-Demand Desktop Streaming solution which is designed to centralize complete…
A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
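The performance comes from striping logical blocks across independent servers and reading them in parallel. As a generic illustration of that idea (not the MAGIC implementation), the sketch below fetches blocks concurrently from several simulated servers and reassembles them in order.

```python
# Generic illustration of parallel block retrieval from striped servers
# (not the MAGIC testbed code): logical blocks are distributed round-robin
# across servers, fetched concurrently, then reassembled in logical order.
from concurrent.futures import ThreadPoolExecutor
import time

SERVERS = ["server0", "server1", "server2", "server3"]   # hypothetical disk servers

def fetch_block(block_id: int) -> bytes:
    server = SERVERS[block_id % len(SERVERS)]             # round-robin striping
    time.sleep(0.01)                                      # stand-in for network/disk latency
    return f"{server}:block{block_id} ".encode()

def read_range(first_block: int, n_blocks: int) -> bytes:
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        blocks = pool.map(fetch_block, range(first_block, first_block + n_blocks))
    return b"".join(blocks)                               # map() preserves order

print(read_range(0, 8).decode())
```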
Koltun, G.F.
2001-01-01
This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.
Mass casualty events: blood transfusion emergency preparedness across the continuum of care.
Doughty, Heidi; Glasgow, Simon; Kristoffersen, Einar
2016-04-01
Transfusion support is a key enabler to the response to mass casualty events (MCEs). Transfusion demand and capability planning should be an integrated part of the medical planning process for emergency system preparedness. Historical reviews have recently supported demand planning for MCEs and mass gatherings; however, computer modeling offers greater insights for resource management. The challenge remains balancing demand and supply especially the demand for universal components such as group O red blood cells. The current prehospital and hospital capability has benefited from investment in the management of massive hemorrhage. The management of massive hemorrhage should address both hemorrhage control and hemostatic support. Labile blood components cannot be stockpiled and a large surge in demand is a challenge for transfusion providers. The use of blood components may need to be triaged and demand managed. Two contrasting models of transfusion planning for MCEs are described. Both illustrate an integrated approach to preparedness where blood transfusion services work closely with health care providers and the donor community. Preparedness includes appropriate stock management and resupply from other centers. However, the introduction of alternative transfusion products, transfusion triage, and the greater use of an emergency donor panel to provide whole blood may permit greater resilience. © 2016 AABB.
NAS Technical Summaries, March 1993 - February 1994
NASA Technical Reports Server (NTRS)
1995-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993
NASA Technical Reports Server (NTRS)
1994-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
A real-time spike sorting method based on the embedded GPU.
Yang, Zelan; Xu, Kedi; Tian, Xiang; Zhang, Shaomin; Zheng, Xiaoxiang
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs), with their high parallel computing capability, might provide an alternative solution for meeting the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized within the thread and memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while their accuracies were the same. The high-performance computing features of the embedded GPU demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
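The pipeline named in the abstract is PCA for feature extraction followed by K-means for clustering, run on the GPU via CUDA. The sketch below shows the same two-stage idea on the CPU with scikit-learn and synthetic waveforms; it is not the authors' CUDA implementation, and the spike templates are fabricated.

```python
# CPU sketch of a PCA + K-means spike-sorting stage on synthetic waveforms
# (the paper runs this pipeline on an embedded GPU via CUDA; this is not that code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_spikes, n_samples = 600, 48

# Three hypothetical spike shapes plus noise stand in for detected waveforms.
t = np.linspace(0, 1, n_samples)
templates = np.stack([np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (3, 5, 8)])
labels_true = rng.integers(0, 3, n_spikes)
waveforms = templates[labels_true] + 0.1 * rng.standard_normal((n_spikes, n_samples))

features = PCA(n_components=3).fit_transform(waveforms)   # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

print("cluster sizes:", np.bincount(labels))
```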
Machine learning and computer vision approaches for phenotypic profiling.
Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J
2017-01-02
With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
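As a minimal illustration of the segmentation, feature extraction, and clustering pipeline the review describes (not code from any surveyed study), the sketch below segments bright blobs in a synthetic image, extracts simple per-object features, and clusters the objects; the image, features, and cluster count are all fabricated for the example.

```python
# Minimal segmentation -> feature extraction -> clustering sketch on a synthetic image
# (illustrative of the pipeline the review describes, not code from a surveyed study).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, (256, 256))
for cy, cx, r in [(60, 60, 8), (180, 70, 8), (70, 190, 16), (190, 190, 16)]:
    yy, xx = np.ogrid[:256, :256]
    image[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] += 1.0   # synthetic "cells"

mask = image > threshold_otsu(image)          # segmentation
objects = regionprops(label(mask))            # per-object measurements
features = np.array([[obj.area, obj.eccentricity] for obj in objects])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for obj, c in zip(objects, clusters):
    centroid = tuple(round(v) for v in obj.centroid)
    print(f"object at {centroid}: area={obj.area}, cluster={c}")
```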
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
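The core of the approach is repeated flux-feasibility testing with an LP solver. The following sketch, using scipy.optimize.linprog on a toy stoichiometric matrix, shows one plausible form of such a test; the matrix, bounds, and function name are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of an LP flux-feasibility test: given a stoichiometric matrix
# S, check whether a steady-state flux exists with the reactions in `zeroed`
# forced to zero and a chosen reaction forced active.
import numpy as np
from scipy.optimize import linprog

def flux_feasible(S, zeroed, active, v_max=1000.0):
    n = S.shape[1]
    bounds = [(0.0, v_max)] * n              # irreversible reactions assumed
    for j in zeroed:
        bounds[j] = (0.0, 0.0)               # exclude these reactions
    bounds[active] = (1.0, v_max)            # force a nonzero flux here
    # Any feasible point will do, so minimize the zero objective.
    res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    return res.status == 0

# Toy open network: R1: -> A, R2: A -> B, R3: B ->
S = np.array([[ 1, -1,  0],   # species A
              [ 0,  1, -1]])  # species B
print(flux_feasible(S, zeroed=[], active=0))   # True: flux can pass through
print(flux_feasible(S, zeroed=[2], active=0))  # False: blocking R3 stops it
```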
Computational Methods for Identification, Optimization and Control of PDE Systems
2010-04-30
Focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where ... that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods ... y sus Aplicaciones, Ciudad de Cordoba, Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse ...
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
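As a minimal illustration of the batch/pipelining idea, the sketch below evaluates several traits concurrently in worker processes rather than sequentially; the trait names and the stand-in model-fitting function are hypothetical and do not represent any particular genomic-selection package.

```python
# Illustrative sketch: run trait evaluations concurrently in worker processes
# instead of sequentially. The fitting function is a computational stand-in.
from concurrent.futures import ProcessPoolExecutor

def fit_trait_model(trait):
    # placeholder for a computationally demanding genomic prediction run
    return trait, sum(i * i for i in range(10**6)) % 97

traits = ["milk_yield", "fertility", "longevity", "feed_efficiency"]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        for trait, score in pool.map(fit_trait_model, traits):
            print(trait, score)
```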
Causal Learning with Local Computations
ERIC Educational Resources Information Center
Fernbach, Philip M.; Sloman, Steven A.
2009-01-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…
ERIC Educational Resources Information Center
Tseng, Min-chen
2014-01-01
This study investigated the online reading performance and the level of visual fatigue of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demands.…
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Perspectives on an education in computational biology and medicine.
Rubinstein, Jill C
2012-09-01
The mainstream application of massively parallel, high-throughput assays in biomedical research has created a demand for scientists educated in Computational Biology and Bioinformatics (CBB). In response, formalized graduate programs have rapidly evolved over the past decade. Concurrently, there is increasing need for clinicians trained to oversee the responsible translation of CBB research into clinical tools. Physician-scientists with dedicated CBB training can facilitate such translation, positioning themselves at the intersection between computational biomedical research and medicine. This perspective explores key elements of the educational path to such a position, specifically addressing: 1) evolving perceptions of the role of the computational biologist and the impact on training and career opportunities; 2) challenges in and strategies for obtaining the core skill set required of a biomedical researcher in a computational world; and 3) how the combination of CBB with medical training provides a logical foundation for a career in academic medicine and/or biomedical research.
ASME V&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
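To illustrate the surrogate idea in general terms, the sketch below fits a Gaussian-process emulator to a handful of runs of a stand-in "simulator" and then queries it cheaply; it is not the challenge-problem model or the authors' toolchain.

```python
# Minimal sketch of a surrogate: fit a Gaussian-process emulator to a few
# expensive simulation runs, then query it cheaply for uncertainty studies.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    return np.sin(3 * x) + 0.5 * x        # placeholder physics model

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 2, size=(12, 1))         # 12 "simulation runs"
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

X_query = np.linspace(0, 2, 5).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)  # cheap surrogate predictions
print(np.round(mean, 3), np.round(std, 3))
```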
Research on Key Technologies of Cloud Computing
NASA Astrophysics Data System (ADS)
Zhang, Shufen; Yan, Hongcan; Chen, Xuebin
With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of a large number of computers, so that application systems can obtain computing power, storage space, and software services on demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious operational details and lets them focus on their business, which encourages innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services, and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is still developing and changing, and there is no unanimous definition. This paper describes the three main service forms of cloud computing: SaaS, PaaS, and IaaS; compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.
Computational biology in the cloud: methods and new insights from computing at scale.
Kasson, Peter M
2013-01-01
The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.
Optical interconnects for satellite payloads: overview of the state-of-the-art
NASA Astrophysics Data System (ADS)
Vervaeke, Michael; Debaes, Christof; Van Erps, Jürgen; Karppinen, Mikko; Tanskanen, Antti; Aalto, Timo; Harjanne, Mikko; Thienpont, Hugo
2010-05-01
The increased demand for broadband communication services such as high-definition television, video on demand, and triple play fuels technologies that enhance the bandwidth available to individual users and hence increase aggregate bandwidths on terrestrial networks. Optical solutions readily satisfy this appetite for bandwidth, whereas electrical interconnection schemes require an ever-increasing effort to counteract signal distortions at higher bitrates. Dense wavelength division multiplexing and all-optical signal regeneration and switching address the bandwidth demands of network trunks. Fiber-to-the-home and fiber-to-the-desk are trends towards providing individual users with greatly increased bandwidth. Operators in the satellite telecommunication sector face similar challenges, fuelled by the same demands as their terrestrial counterparts. Moreover, the limited number of orbital positions for new satellites sets the trend for an increase in payload data-communication capacity using an ever-increasing number of complex multi-beam active antennas and a larger aggregate bandwidth. Only satellites with very large capacity, high computational density, and flexible, transparent, fully digital payload solutions achieve affordable communication prices. To keep pace with the bandwidth and flexibility requirements, designers have to come up with systems requiring a total digital throughput of a few Tb/s, resulting in a high power-consuming satellite payload. An estimated 90% of the total power consumption per chip is used for off-chip communication lines. We have undertaken a study to assess the viability of optical datacommunication solutions to alleviate the demands regarding power consumption and aggregate bandwidth imposed on future satellite communication payloads. The review on optical interconnects given here is especially focused on the demands of the satellite communication business and the particular environment in which the optics have to perform their functionality: space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostuk, M.; Uram, T. D.; Evans, T.
2018-02-01
For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
Current trends in processor manufacturing focus on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data processing, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Several case studies compare the performance of throughput-computing applications using shared-memory programming in OpenMP and CUDA-based GPU programming.
Acquisition of ICU data: concepts and demands.
Imhoff, M
1992-12-01
Data overload is a pressing problem in critical care today, so it is of utmost importance to improve the acquisition, storage, integration, and presentation of medical data, which appears feasible only with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS), and (4) manual input. All sources differ markedly in the quality and quantity of data and in the demands placed on the interfaces between the data source and the patient database. The demands for data acquisition from bedside medical devices, the ICU LAN, and the HIS center on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks, and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface, which must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical, and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources will be necessary in the future for additional interfacing technologies such as speech recognition and 3D GUIs. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development, leaving no room for solutions based on traditional concepts of personal computers. (ABSTRACT TRUNCATED AT 250 WORDS)
The Ethics of Cloud Computing.
de Bruin, Boudewijn; Floridi, Luciano
2017-02-01
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacentres (e.g., Amazon). It considers the cloud services providers leasing 'space in the cloud' from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private 'clouders' using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, hospitals etc. storing client data in the cloud) will have to follow rather more stringent regulations.
Templet Web: the use of volunteer computing approach in PaaS-style cloud
NASA Astrophysics Data System (ADS)
Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil
2018-03-01
This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.
Kelly, Jack; Knottenbelt, William
2015-01-01
Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the 'ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data
NASA Astrophysics Data System (ADS)
Evans, J. D.; Valente, E. G.; Chettri, S. S.
2011-12-01
We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.
Globus | Informatics Technology for Cancer Research (ITCR)
Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data
NASA Astrophysics Data System (ADS)
Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.
2017-12-01
Big data is becoming the norm in geoscience domains. A platform capable of efficiently managing, accessing, analyzing, mining, and learning from big data to extract new information and knowledge is desired. This paper introduces our latest effort on developing such a platform based on our past years' experience with cloud and high-performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with appropriate network, computer, and storage systems; b) the second layer is a cloud computing layer based on virtualization that provides on-demand computing services for the upper layers; c) the third layer consists of big data containers customized for dealing with different types of data and functionalities; d) the fourth layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.
Information technology challenges of biodiversity and ecosystems informatics
Schnase, J.L.; Cushing, J.; Frame, M.; Frondorf, A.; Landis, E.; Maier, D.; Silberschatz, A.
2003-01-01
Computer scientists, biologists, and natural resource managers recently met to examine the prospects for advancing computer science and information technology research by focusing on the complex and often-unique challenges found in the biodiversity and ecosystem domain. The workshop and its final report reveal that the biodiversity and ecosystem sciences are fundamentally information sciences and often address problems having distinctive attributes of scale and socio-technical complexity. The paper provides an overview of the emerging field of biodiversity and ecosystem informatics and demonstrates how the demands of biodiversity and ecosystem research can advance our understanding and use of information technologies.
OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.
2014-12-01
OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures, to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher-order data products, and the user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high-performance disk storage (SSD) for the hot areas and less expensive, slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high-resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community come new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT-hosted data.
Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*
Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.
2015-01-01
Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large-scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363
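In the spirit of the resource- and cost-estimation tools described above, a back-of-envelope estimate might look like the sketch below; all figures (per-file runtime, instance count, hourly price) are hypothetical.

```python
# Back-of-envelope cloud cost estimate for a batch of searches; all numbers
# (runtimes, hourly price, instance count) are hypothetical placeholders.
def estimate_cost(n_files, minutes_per_file, n_instances, price_per_hour):
    wall_hours = (n_files * minutes_per_file) / 60.0 / n_instances
    return wall_hours, wall_hours * n_instances * price_per_hour

hours, dollars = estimate_cost(n_files=1100, minutes_per_file=10,
                               n_instances=25, price_per_hour=0.50)
print(f"~{hours:.1f} h wall-clock, ~${dollars:.2f} total")
```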
SPACEWAY: Providing affordable and versatile communication solutions
NASA Astrophysics Data System (ADS)
Fitzpatrick, E. J.
1995-08-01
By the end of this decade, Hughes' SPACEWAY network will provide the first interactive 'bandwidth on demand' communication services for a variety of applications. High quality digital voice, interactive video, global access to multimedia databases, and transborder workgroup computing will make SPACEWAY an essential component of the computer-based workplace of the 21st century. With relatively few satellites to construct, insure, and launch -- plus extensive use of cost-effective, tightly focused spot beams on the world's most populated areas -- the high capacity SPACEWAY system can pass its significant cost savings onto its customers. The SPACEWAY network is different from other proposed global networks in that its geostationary orbit location makes it a truly market driven system: each satellite will make available extensive telecom services to hundreds of millions of people within the continuous view of that satellite, providing immediate capacity within a specific region of the world.
Biomanufacturing: a US-China National Science Foundation-sponsored workshop.
Sun, Wei; Yan, Yongnian; Lin, Feng; Spector, Myron
2006-05-01
A recent US-China National Science Foundation-sponsored workshop on biomanufacturing reviewed the state of the art of an array of new technologies for producing scaffolds for tissue engineering, providing precision multi-scale control of material, architecture, and cells. One broad category of such techniques has been termed solid freeform fabrication. The techniques in this category include: stereolithography, selective laser sintering, single- and multiple-nozzle deposition and fused deposition modeling, and three-dimensional printing. The precise and repetitive placement of material and cells in a three-dimensional construct at the micrometer length scale demands computer control. These novel computer-controlled scaffold production techniques, when coupled with computer-based imaging and structural modeling methods for the production of the templates for the scaffolds, define an emerging field of computer-aided tissue engineering. In formulating the questions that remain to be answered and discussing the knowledge required to further advance the field, the workshop provided a basis for recommendations for future work.
46 CFR 111.60-7 - Demand loads.
Code of Federal Regulations, 2010 CFR
2010-10-01
... REQUIREMENTS Wiring Materials and Methods § 111.60-7 Demand loads. Generator, feeder, and bus-tie cables must be selected on the basis of a computed load of not less than the demand load given in Table 111.60-7...
46 CFR 111.60-7 - Demand loads.
Code of Federal Regulations, 2011 CFR
2011-10-01
... REQUIREMENTS Wiring Materials and Methods § 111.60-7 Demand loads. Generator, feeder, and bus-tie cables must be selected on the basis of a computed load of not less than the demand load given in Table 111.60-7...
Model documentation report: Residential sector demand module of the national energy modeling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.
Exploring the use of I/O nodes for computation in a MIMD multiprocessor
NASA Technical Reports Server (NTRS)
Kotz, David; Cai, Ting
1995-01-01
As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Ranjan; Chelmis, Charalampos; Aman, Saima
The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the "increasing dimensionality" problem, where a cluster point is represented by the entire customer consumption time series. To overcome this, we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high-volume, high-dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work, we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a 'killer' approach that breaks the "curse of dimensionality" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
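As a generic illustration of clustering consumption profiles on the fly, the sketch below uses mini-batch k-means with incremental updates; it conveys the online-clustering idea only and is not the randomized algorithm with performance guarantees proposed in the paper.

```python
# Sketch of online clustering of streaming consumption windows with
# mini-batch k-means; data and parameters are synthetic placeholders.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

model = MiniBatchKMeans(n_clusters=5, random_state=0)
rng = np.random.default_rng(0)

for batch in range(20):                         # pretend stream of meter readings
    # each row: one customer's 24-hour load profile arriving in this batch
    X = rng.gamma(shape=2.0, scale=1.0, size=(64, 24))
    model.partial_fit(X)                        # update clusters incrementally

labels = model.predict(rng.gamma(2.0, 1.0, size=(5, 24)))
print(labels)
```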
Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand
NASA Astrophysics Data System (ADS)
Bodenstein, Christian
In a world where more and more businesses seem to trade in an online market, the supply of online services could quickly reach its capacity limits in the face of ever-growing demand. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots while facing too little demand during low-traffic timeslots, although the latter is becoming less frequent. At this point, deciding which user is allocated what level of service becomes essential. The concept of Grid computing could offer a meaningful alternative to conventional supercomputing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing also harbors great energy-saving potential. When scheduling projects in such a Grid environment, however, deciding which process to assign to which system becomes so computationally complex that schedules are often completed too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize utility given some constraint, often resorting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternative model of energy-efficient scheduling while keeping a respectable amount of economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
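A minimal sketch of the heuristic's core idea, ranking nodes by energy efficiency (work per joule) and greedily placing jobs on the most efficient node with spare capacity, is given below; the node and job figures are invented for illustration.

```python
# Greedy, efficiency-ranked placement sketch; an illustration of the general
# idea only, not the paper's scheduling model. All figures are invented.
def schedule(jobs, nodes):
    """jobs: list of (name, flops); nodes: dicts with flops_per_s, watts,
    and free capacity (FLOPs) for the scheduling window."""
    placement = {}
    nodes = sorted(nodes, key=lambda n: n["flops_per_s"] / n["watts"], reverse=True)
    for name, flops in sorted(jobs, key=lambda j: -j[1]):   # big jobs first
        for node in nodes:
            if node["free"] >= flops:
                node["free"] -= flops
                placement[name] = node["id"]
                break
    return placement

nodes = [{"id": "n1", "flops_per_s": 2e12, "watts": 300, "free": 5e15},
         {"id": "n2", "flops_per_s": 1e12, "watts": 100, "free": 3e15}]
jobs = [("sim_a", 2e15), ("sim_b", 4e15), ("render_c", 1e15)]
print(schedule(jobs, nodes))
```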
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains to provide wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
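The hierarchical idea can be illustrated with a toy two-level allocator that first picks a cluster with spare capacity and then a processor within it; the capacities and the first-fit policy below are assumptions for illustration, not the allocation algorithms evaluated in the paper.

```python
# Toy hierarchical (cluster, then processor) first-fit allocation sketch;
# capacities in MOPS are invented and the policy is illustrative only.
def allocate(session_mops, clusters):
    """clusters: {cluster_id: [free MOPS per processor]}"""
    for cid, processors in clusters.items():            # cluster-level decision
        for i, free in enumerate(processors):            # node-level decision
            if free >= session_mops:
                processors[i] -= session_mops
                return cid, i
    return None                                          # request blocked

clusters = {"c0": [800, 500], "c1": [1200, 300]}
print(allocate(600, clusters))   # -> ('c0', 0)
print(allocate(900, clusters))   # -> ('c1', 0)
```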
A tool for modeling concurrent real-time computation
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.
1990-01-01
Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
CE-ACCE: The Cloud Enabled Advanced sCience Compute Environment
NASA Astrophysics Data System (ADS)
Cinquini, L.; Freeborn, D. J.; Hardman, S. H.; Wong, C.
2017-12-01
Traditionally, Earth Science data from NASA remote sensing instruments has been processed by building custom data processing pipelines (often based on a common workflow engine or framework) which are typically deployed and run on an internal cluster of computing resources. This approach has some intrinsic limitations: it requires each mission to develop and deploy a custom software package on top of the adopted framework; it makes use of dedicated hardware, network and storage resources, which must be specifically purchased, maintained and re-purposed at mission completion; and computing services cannot be scaled on demand beyond the capability of the available servers. More recently, the rise of Cloud computing, coupled with other advances in containerization technology (most prominently, Docker) and micro-services architecture, has enabled a new paradigm, whereby space mission data can be processed through standard system architectures, which can be seamlessly deployed and scaled on demand on either on-premise clusters, or commercial Cloud providers. In this talk, we will present one such architecture named CE-ACCE ("Cloud Enabled Advanced sCience Compute Environment"), which we have been developing at the NASA Jet Propulsion Laboratory over the past year. CE-ACCE is based on the Apache OODT ("Object Oriented Data Technology") suite of services for full data lifecycle management, which are turned into a composable array of Docker images, and complemented by a plug-in model for mission-specific customization. We have applied this infrastructure to both flying and upcoming NASA missions, such as ECOSTRESS and SMAP, and demonstrated deployment on the Amazon Cloud, either using simple EC2 instances, or advanced AWS services such as Amazon Lambda and ECS (EC2 Container Services).
ERIC Educational Resources Information Center
Wlodyga, Linda J.
2010-01-01
In an attempt to prepare new graduate nurses to meet the demands of health care delivery systems, the use of computer-based clinical information systems that combine hands-on experience with computer based information systems was explored. Since the introduction of Electronic Medical Records (EMR) nearly two decades ago, the demand for nurses to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Gross, R.; Goble, W
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
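One widely used textbook-style approximation for a single (1oo1) device, shown below, computes PFDavg from the dangerous undetected failure rate, the proof-test interval, and (when proof tests are imperfect) the proof-test coverage and mission time; it is offered as an illustrative sketch and is not necessarily the model developed in the paper.

```python
# Hedged sketch of a standard PFDavg approximation for a 1oo1 device:
# PFDavg ~ lambda_DU * TI / 2 with perfect proof tests; with imperfect
# coverage, the untested fraction accumulates over the mission time.
def pfd_avg(lambda_du_per_hr, proof_test_interval_hr,
            proof_test_coverage=1.0, mission_time_hr=None):
    pfd = proof_test_coverage * lambda_du_per_hr * proof_test_interval_hr / 2
    if proof_test_coverage < 1.0:
        if mission_time_hr is None:
            raise ValueError("mission_time_hr needed when coverage < 1")
        pfd += (1 - proof_test_coverage) * lambda_du_per_hr * mission_time_hr / 2
    return pfd

# e.g. lambda_DU = 2e-6 /h, annual proof test, 90% coverage, 15-year mission
print(f"{pfd_avg(2e-6, 8760, 0.9, 15 * 8760):.2e}")
```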
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
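To make concrete what sampling from a posterior involves, here is a minimal random-walk Metropolis sampler for a one-parameter toy model. It is a generic illustration only and does not use BCM's API or its multithreaded algorithms.

# Minimal random-walk Metropolis sampler for a one-parameter toy model.
# Generic illustration; BCM itself provides eleven multithreaded samplers.
import numpy as np

def log_posterior(theta: float, data: np.ndarray) -> float:
    # Toy model: data ~ Normal(theta, 1), prior theta ~ Normal(0, 10)
    log_lik = -0.5 * np.sum((data - theta) ** 2)
    log_prior = -0.5 * (theta / 10.0) ** 2
    return log_lik + log_prior

def metropolis(data: np.ndarray, n_samples: int = 5000, step: float = 0.5,
               seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    theta = 0.0
    current_lp = log_posterior(theta, data)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        proposal_lp = log_posterior(proposal, data)
        if np.log(rng.uniform()) < proposal_lp - current_lp:   # accept/reject step
            theta, current_lp = proposal, proposal_lp
        samples[i] = theta
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, size=50)
    draws = metropolis(data)
    print("posterior mean ~", draws[1000:].mean())   # discard burn-in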
NASA Astrophysics Data System (ADS)
Pierce, S. A.
2017-12-01
Decision making for groundwater systems is becoming increasingly important, as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure that enables interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines, modify computing architecture, and broaden the usability of and access to advanced compute environments. The result is dexterous configurations and new opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. This presentation will provide an overview of advanced computing services in the cloud, using integrated groundwater management case studies to highlight how Cloud CI streamlines the process of setting up an interactive decision support system. Moreover, advances in artificial intelligence offer new techniques for old problems, from integrating data to adaptive sensing and from interactive dashboards to optimizing multi-attribute problems. The combination of scientific expertise, flexible cloud computing solutions, and intelligent systems opens new research horizons.
Metric Use in the Tool Industry. A Status Report and a Test of Assessment Methodology.
1982-04-20
Weights and Measures) CIM - Computer-Integrated Manufacturing; CNC - Computer Numerical Control; DOD - Department of Defense; DODISS - DOD Index of...numerically-controlled (CNC) machines that have an inch-millimeter selection switch and a corresponding dual readout scale. The use of both metric...satisfactorily met the demands of both domestic and foreign customers for metric machine tools by providing either metric-capable machines or NC and CNC
The CompTox Chemistry Dashboard - A Community Data Resource for Environmental Chemistry
Despite an abundance of online databases providing access to chemical data, there is increasing demand for high-quality, structure-curated, open data to meet the various needs of the environmental sciences and computational toxicology communities. The U.S. Environmental Protectio...
Control of Transitional and Turbulent Flows Using Plasma-Based Actuators
2006-06-01
by means of asymmetric dielectric-barrier-discharge (DBD) actuators is presented. The flow fields are simulated employing an extensively validated...effective use of DBD devices. As a consequence, meaningful computations require the use of three-dimensional large-eddy simulation approaches capable of...counter-flow DBD actuator is shown to provide an effective on-demand tripping device. This property is exploited for the suppression of laminar
Utility Computing: Reality and Beyond
NASA Astrophysics Data System (ADS)
Ivanov, Ivan I.
Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. The concept of computing as a public utility, much like water, gas, electricity and telecommunications, was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable `on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?
Duct flow nonuniformities for Space Shuttle Main Engine (SSME)
NASA Technical Reports Server (NTRS)
1987-01-01
A three-duct Space Shuttle Main Engine (SSME) Hot Gas Manifold geometry code was developed for use. The methodology of the program is described, recommendations on its implementation are made, and an input guide, input deck listing, and source code listing are provided. The code listing is extensively commented to assist the user in following its development and logic. A working source deck will be provided. A thorough analysis was made of the proper boundary conditions and chemistry kinetics necessary for an accurate computational analysis of the flow environment in the SSME fuel side preburner chamber during the initial startup transient. Pertinent results were presented to facilitate incorporation of these findings into an appropriate CFD code. The computation must be a turbulent computation, since the flow field turbulent mixing will have a profound effect on the chemistry. Because of the additional equations demanded by the chemistry model, it is recommended that, for expediency, a simple algebraic mixing-length model be adopted. Performing this computation for all or selected intervals of the startup period will require substantial CPU time regardless of the specific CFD code selected.
GeoBrain Computational Cyber-laboratory for Earth Science Studies
NASA Astrophysics Data System (ADS)
Deng, M.; di, L.
2009-12-01
Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, currently Earth scientists, educators, and students face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.
Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?
Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J
2017-01-01
The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images at higher resolution, and the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate the cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
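The kind of back-of-the-envelope estimate the authors advocate can be scripted in a few lines, as in the sketch below; the hourly price and job counts are placeholders, not measured AWS rates or benchmark results.

# Rough cost/time estimate for running N independent neuroimaging jobs on AWS.
# The hourly price is a placeholder; look up current on-demand or spot rates.

def estimate(n_jobs: int, hours_per_job: float, n_instances: int,
             price_per_instance_hour: float) -> dict:
    waves = -(-n_jobs // n_instances)               # ceil division: batches of jobs
    wall_clock_h = waves * hours_per_job
    compute_cost = n_jobs * hours_per_job * price_per_instance_hour
    return {"wall_clock_hours": wall_clock_h, "compute_cost_usd": round(compute_cost, 2)}

if __name__ == "__main__":
    # e.g. 400 subjects, 3 h of processing each, a 50-instance cluster
    print(estimate(n_jobs=400, hours_per_job=3.0, n_instances=50,
                   price_per_instance_hour=0.17))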
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing rapidly changing business environments, implementation of flexible business processes is crucial but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand according to business requirements and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform that virtualises enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services at larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to fulfil configurable, on-demand data access during business process execution. CIRPA also isolates transaction data from business processes while supporting diverse business process composition. Finally, a case study applying our method in a logistics application shows that CIRPA provides enhanced performance in both static service encapsulation and dynamic service execution in a cloud computing environment.
Home-Based Computer Gaming in Vestibular Rehabilitation of Gaze and Balance Impairment.
Szturm, Tony; Reimer, Karen M; Hochman, Jordan
2015-06-01
Disease or damage of the vestibular sense organs causes a range of distressing symptoms and functional problems that could include loss of balance, gaze instability, disorientation, and dizziness. A novel computer-based rehabilitation system with therapeutic gaming application has been developed. This method allows different gaze and head movement exercises to be coupled to a wide range of inexpensive, commercial computer games. It can be used in standing, and thus graded balance demands using a sponge pad can be incorporated into the program. A case series pre- and postintervention study was conducted of nine adults diagnosed with peripheral vestibular dysfunction who received a 12-week home rehabilitation program. The feasibility and usability of the home computer-based therapeutic program were established. Study findings revealed that using head rotation to interact with computer games, when coupled to demanding balance conditions, resulted in significant improvements in standing balance, dynamic visual acuity, gaze control, and walking performance. Perception of dizziness as measured by the Dizziness Handicap Inventory also decreased significantly. These preliminary findings provide support that a low-cost home game-based exercise program is well suited to train standing balance and gaze control (with active and passive head motion).
California DREAMing: The design of residential demand responsive technology with people in mind
NASA Astrophysics Data System (ADS)
Peffer, Therese Evelyn
Electrical utilities worldwide are exploring "demand response" programs to reduce electricity consumption during peak periods. Californian electrical utilities would like to pass the higher cost of peak demand to customers to offset costs, increase reliability, and reduce peak consumption. Variable pricing strategies require technology to communicate a dynamic price to customers and respond to that price. However, evidence from thermostat and energy display studies as well as research regarding energy-saving behaviors suggests that devices cannot effect residential demand response without the sanction and participation of people. This study developed several technologies to promote or enable residential demand response. First, along with a team of students and professors, I designed and tested the Demand Response Electrical Appliance Manager (DREAM). This wireless network of sensors, actuators, and controller with a user interface provides information to intelligently control a residential heating and cooling system and to inform people of their energy usage. We tested the system with computer simulation and in the laboratory and field. Secondly, as part of my contribution to the team, I evaluated machine-learning to predict a person's seasonal temperature preferences by analyzing existing data from office workers. The third part of the research involved developing an algorithm that generated temperature setpoints based on outdoor temperature. My study compared the simulated energy use using these setpoints to that using the setpoints of a programmable thermostat. Finally, I developed and tested a user interface for a thermostat and in-home energy display. This research tested the effects of both energy versus price information and the context of sponsorship on the behavior of subjects. I also surveyed subjects on the usefulness of various displays. The wireless network succeeded in providing detailed data to enable an intelligent controller and provide feedback to the users. The learning algorithm showed mixed results. The adaptive temperature setpoints saved energy in both annual and summertime simulations. The context in which I introduced the DREAM interface affected behavior, but the type of information displayed did not. The subjects responded that appliance-level feedback and tools that provided choices would be useful in a dynamic tariff environment.
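As a sketch of the third component, an outdoor-temperature-driven setpoint rule can be as simple as a clamped linear relation of the kind used in adaptive comfort models. The coefficients below are illustrative assumptions, not the values derived in the dissertation.

# Illustrative adaptive cooling setpoint driven by outdoor temperature.
# Coefficients are placeholders, not the dissertation's fitted values.

def cooling_setpoint_c(outdoor_temp_c: float,
                       base: float = 20.0, slope: float = 0.3,
                       lo: float = 23.0, hi: float = 28.0) -> float:
    """Linear comfort relation, clamped to a [lo, hi] deadband."""
    setpoint = base + slope * outdoor_temp_c
    return max(lo, min(hi, setpoint))

if __name__ == "__main__":
    for t_out in (15, 25, 35):
        print(t_out, "->", round(cooling_setpoint_c(t_out), 1), "C")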
THE EFFECTS OF MAINTENANCE ACTIONS ON THE PFDavg OF SPRING OPERATED PRESSURE RELIEF VALVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Gross, R.
2014-04-01
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
The Effects of Maintenance Actions on the PFDavg of Spring Operated Pressure Relief Valves
Harris, S.; Gross, R.; Goble, W; ...
2015-12-01
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, K; Kagadis, G; Xing, L
As defined by the National Institute of Standards and Technology, cloud computing is "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such "on-demand" access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Cloud Computing. Technology Briefing. Number 1
ERIC Educational Resources Information Center
Alberta Education, 2013
2013-01-01
Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…
Mesoscale energy deposition footprint model for kiloelectronvolt cluster bombardment of solids.
Russo, Michael F; Garrison, Barbara J
2006-10-15
Molecular dynamics simulations have been performed to model 5-keV C60 and Au3 projectile bombardment of an amorphous water substrate. The goal is to obtain detailed insights into the dynamics of motion in order to develop a straightforward and less computationally demanding model of the process of ejection. The molecular dynamics results provide the basis for the mesoscale energy deposition footprint model. This model provides a method for predicting relative yields based on information from less than 1 ps of simulation time.
Jade: using on-demand cloud analysis to give scientists back their flow
NASA Astrophysics Data System (ADS)
Robinson, N.; Tomlinson, J.; Hilson, A. J.; Arribas, A.; Powell, T.
2017-12-01
The UK's Met Office generates 400 TB of weather and climate data every day by running physical models on its Top 20 supercomputer. As data volumes explode, there is a danger that analysis workflows become dominated by watching progress bars rather than thinking about science. We have been researching how we can use distributed computing to allow analysts to process these large volumes of high velocity data in a way that's easy, effective and cheap. Our prototype analysis stack, Jade, tries to encapsulate this. Functionality includes: an under-the-hood Dask engine which parallelises and distributes computations without the need to retrain analysts; hybrid compute clusters (AWS, Alibaba, and local compute) comprising many thousands of cores; clusters which autoscale up/down in response to calculation load using Kubernetes, balancing the cluster across providers based on the current price of compute; and lazy data access from cloud storage via containerised OpenDAP. This technology stack allows us to perform calculations many orders of magnitude faster than is possible on local workstations. It is also possible to outperform dedicated local compute clusters, as cloud compute can, in principle, scale to much larger sizes. The use of ephemeral compute resources also makes this implementation cost efficient.
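The under-the-hood engine mentioned above is Dask, whose public API already supports the pattern sketched below: connect to a remote scheduler and reduce a large, lazily evaluated array in parallel. The scheduler address is a placeholder, and the Kubernetes autoscaling layer is not shown.

# Minimal Dask pattern of the kind a stack like Jade builds on: connect to a
# (remote) scheduler and reduce a large, lazily evaluated array in parallel.
# The scheduler address is a placeholder; Kubernetes autoscaling is not shown.
import dask.array as da
from dask.distributed import Client

if __name__ == "__main__":
    client = Client("tcp://scheduler.example.org:8786")   # or Client() for a local cluster
    # Lazily define roughly 8 GB of synthetic "model output" split into chunks.
    field = da.random.random((1000, 1000, 1000), chunks=(100, 1000, 1000))
    zonal_mean = field.mean(axis=(1, 2))                  # still lazy
    print(zonal_mean.compute()[:5])                       # executes on the cluster
    client.close()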
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
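A minimal version of the linear variant described above can be assembled from off-the-shelf pieces: project the high-dimensional outputs with principal component analysis and interpolate the reduced coordinates with radial basis functions. The sketch below uses synthetic data and is not the thesis implementation.

# Linear reduced order surrogate: PCA on high-dimensional outputs plus RBF
# interpolation of the reduced coordinates over the design parameters.
# Synthetic data only; not the thesis code.
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(60, 3))          # 60 designs, 3 parameters
# Pretend each simulation returns a 5000-dimensional output field:
Y_train = np.sin(X_train @ rng.normal(size=(3, 5000))) + 0.01 * rng.normal(size=(60, 5000))

pca = PCA(n_components=10)
Z_train = pca.fit_transform(Y_train)               # 60 x 10 reduced coordinates
rbf = RBFInterpolator(X_train, Z_train)            # map parameters -> reduced coords

X_new = rng.uniform(0, 1, size=(5, 3))
Y_pred = pca.inverse_transform(rbf(X_new))         # back to full 5000-dim fields
print(Y_pred.shape)                                # (5, 5000)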
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
Secure Genomic Computation through Site-Wise Encryption
Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu
2015-01-01
Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278
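One way to picture site-wise protection that still permits equality search is deterministic keyed tokenization of each (chromosome, position, allele) site, so that an encrypted marker can be matched against an encrypted genome without decryption. The HMAC-based sketch below is a simplified stand-in, not the paper's actual encryption scheme or its Hadoop implementation.

# Simplified site-wise tokenization: each (chromosome, position, allele) site is
# replaced by a deterministic keyed token, so equality search works on tokens.
# Illustrative stand-in only; not the paper's actual encryption scheme.
import hmac, hashlib

KEY = b"site-wise-demo-key"            # in practice: a securely managed secret

def site_token(chrom: str, pos: int, allele: str, key: bytes = KEY) -> str:
    msg = f"{chrom}:{pos}:{allele}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def encrypt_genome(sites):
    return {site_token(*s) for s in sites}

if __name__ == "__main__":
    genome = encrypt_genome([("chr1", 12345, "A"), ("chr7", 55242464, "T")])
    marker = ("chr7", 55242464, "T")             # e.g. a ClinVar disease marker
    print("marker present:", site_token(*marker) in genome)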
Hanse, J J; Forsman, M
2001-02-01
A method for psychosocial evaluation of potentially stressful or unsatisfactory situations in manual work was developed. It focuses on subjective responses regarding specific situations and is based on interactive worker assessment when viewing video recordings of oneself. The worker is first video-recorded during work. The video is then displayed on the computer terminal, and the filmed worker clicks on virtual controls on the screen whenever an unsatisfactory psychosocial situation appears; a window of questions regarding psychological demands, mental strain and job control is then opened. A library with pictorial information and comments on the selected situations is formed in the computer. The evaluation system, called PSIDAR, was applied in two case studies, one of manual materials handling in an automotive workshop and one of a group of workers producing and testing instrument panels. The findings indicate that PSIDAR can provide data that are useful in a participatory ergonomic process of change.
Secure Genomic Computation through Site-Wise Encryption.
Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu
2015-01-01
Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients' genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds.
GSKY: A scalable distributed geospatial data server on the cloud
NASA Astrophysics Data System (ADS)
Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben
2017-04-01
Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. Being able to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is infeasible, and so such data manipulation must be performed on-the-fly using efficient, high performance techniques. Ideally this should be performed using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve large numbers of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as an independent service. An indexing service crawls data collections either locally or remotely by extracting, storing and indexing all spatio-temporal metadata associated with each individual record. GSKY provides the user with the ability to specify how ingested data should be aggregated, transformed and presented. It presents an OGC standards-compliant interface, allowing ready accessibility for users of the data via Web Map Services (WMS), Web Processing Services (WPS) or raw data arrays using Web Coverage Services (WCS). The presentation will show some cases where we have used this new capability to provide a significant improvement over previous approaches.
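Because GSKY speaks standard OGC protocols, a client needs nothing more than an HTTP request. In the sketch below the endpoint URL, layer name, and bounding box are placeholders for whatever a particular deployment exposes.

# Fetch a rendered map tile from an OGC WMS endpoint such as the one GSKY exposes.
# The endpoint URL, layer name, bounding box and time value are placeholders.
import requests

WMS_URL = "https://gsky.example.org/ows"        # hypothetical deployment

params = {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "landsat8_nbar",                  # placeholder layer name
    "crs": "EPSG:4326",
    "bbox": "-44.0,112.0,-10.0,154.0",          # lat/lon order for EPSG:4326 in WMS 1.3.0
    "width": 512, "height": 512,
    "format": "image/png",
    "time": "2017-01-01T00:00:00Z",
}

resp = requests.get(WMS_URL, params=params, timeout=60)
resp.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(resp.content)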
Performance Evaluation of Resource Management in Cloud Computing Environments.
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
Performance Evaluation of Resource Management in Cloud Computing Environments
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730
Computer code for analyzing the performance of aquifer thermal energy storage systems
NASA Astrophysics Data System (ADS)
Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.
1985-05-01
A code called the Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides the ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting the experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.
Computational analysis of aircraft pressure relief doors
NASA Astrophysics Data System (ADS)
Schott, Tyler
Modern trends in commercial aircraft design have sought to improve fuel efficiency while reducing emissions by operating at higher pressures and temperatures than ever before. Consequently, greater demands are placed on the auxiliary bleed air systems used for a multitude of aircraft operations. The increased role of bleed air systems poses significant challenges for the pressure relief system to ensure the safe and reliable operation of the aircraft. The core compartment pressure relief door (PRD) is an essential component of the pressure relief system which functions to relieve internal pressure in the core casing of a high-bypass turbofan engine during a burst duct over-pressurization event. The successful modeling and analysis of a burst duct event are imperative to the design and development of PRD's to ensure that they will meet the increased demands placed on the pressure relief system. Leveraging high-performance computing coupled with advances in computational analysis, this thesis focuses on a comprehensive computational fluid dynamics (CFD) study to characterize turbulent flow dynamics and quantify the performance of a core compartment PRD across a range of operating conditions and geometric configurations. The CFD analysis was based on a compressible, steady-state, three-dimensional, Reynolds-averaged Navier-Stokes approach. Simulations were analyzed, and results show that variations in freestream conditions, plenum environment, and geometric configurations have a non-linear impact on the discharge, moment, thrust, and surface temperature characteristics. The CFD study revealed that the underlying physics for this behavior is explained by the interaction of vortices, jets, and shockwaves. This thesis research is innovative and provides a comprehensive and detailed analysis of existing and novel PRD geometries over a range of realistic operating conditions representative of a burst duct over-pressurization event. Further, the study provides aircraft manufacturers with valuable insight into the impact that operating conditions and geometric configurations have on PRD performance and how the information can be used to assist future research and development of PRD design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.
2016-04-15
Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
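Under the simplifying assumption that each scenario directly prescribes the load it places on every arc, a greedy heuristic in the general spirit of the one described can be written in a few lines: repeatedly add the scenario whose inclusion raises the capacity cost the least, until the required fraction of scenarios is covered. This is a hedged illustration only; the authors' algorithm instead checks feasibility via cut-set separation.

# Greedy heuristic sketch for chance-constrained capacity sizing, assuming each
# scenario k prescribes a known load loads[k, a] on every arc a (a simplification;
# the paper checks scenario feasibility via cut-set separation instead).
import numpy as np

def greedy_capacity(loads: np.ndarray, cost: np.ndarray, alpha: float):
    n_scen, n_arcs = loads.shape
    target = int(np.ceil(alpha * n_scen))
    chosen, cap = [], np.zeros(n_arcs)
    remaining = set(range(n_scen))
    while len(chosen) < target:
        # Pick the scenario whose inclusion increases total capacity cost the least.
        best = min(remaining,
                   key=lambda k: cost @ np.maximum(cap, loads[k]) - cost @ cap)
        cap = np.maximum(cap, loads[best])
        chosen.append(best)
        remaining.remove(best)
    return cap, chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loads = rng.uniform(0, 10, size=(20, 6))    # 20 scenarios, 6 arcs
    cost = rng.uniform(1, 3, size=6)            # per-unit capacity cost
    cap, chosen = greedy_capacity(loads, cost, alpha=0.8)
    print("capacities:", np.round(cap, 2), "covered:", len(chosen), "of 20")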
Kelly, Jack; Knottenbelt, William
2015-01-01
Many countries are rolling out smart electricity meters. These measure a home’s total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the ‘ground truth’ demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset. PMID:25984347
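A first step in working with such a dataset is aligning appliance-level and aggregate readings on a common time grid. The pandas sketch below assumes simple two-column CSV files with Unix timestamps and placeholder file names; it does not reflect UK-DALE's exact file layout.

# Align an aggregate power series with 1/6 Hz appliance readings by resampling
# both onto a common 6-second grid. Column names, timestamp units and file
# names are assumptions for illustration, not UK-DALE's exact format.
import pandas as pd

def load_power(path: str) -> pd.Series:
    df = pd.read_csv(path, names=["timestamp", "watts"])
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
    return df.set_index("timestamp")["watts"]

aggregate = load_power("house1_aggregate.csv").resample("6s").mean()
kettle = load_power("house1_kettle.csv").resample("6s").mean()

# Fraction of household energy attributable to the kettle over the overlap period.
joined = pd.concat({"aggregate": aggregate, "kettle": kettle}, axis=1).dropna()
print((joined["kettle"].sum() / joined["aggregate"].sum()) * 100, "% of energy")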
NASA Astrophysics Data System (ADS)
Kelly, Jack; Knottenbelt, William
2015-03-01
Many countries are rolling out smart electricity meters. These measure a home’s total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the ‘ground truth’ demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
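The fully stressed design rule that the procedure relies on is a one-line resizing update, new area = old area times (stress / allowable stress), iterated until the stresses converge. The sketch below applies it to a toy set of axial members with fixed internal forces; in a real wing the forces would be recomputed by reanalysis at each iteration, and this is not the paper's code.

# Fully stressed design iteration on a toy set of axial members: resize each
# cross-sectional area in proportion to how far its stress is from the allowable.
# Toy stand-in with a fixed-force assumption; real structures need reanalysis
# of internal forces at each iteration.
import numpy as np

forces = np.array([12e3, 5e3, 20e3])       # N, assumed member forces
sigma_allow = 250e6                        # Pa, allowable stress
areas = np.full(3, 1e-4)                   # m^2, initial guess

for it in range(20):
    stress = forces / areas
    new_areas = areas * stress / sigma_allow   # FSD resizing rule
    if np.allclose(new_areas, areas, rtol=1e-6):
        break
    areas = new_areas

print("iterations:", it, "areas (cm^2):", np.round(areas * 1e4, 2))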
Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.
Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard
2015-02-01
Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.
Eye-related pain induced by visually demanding computer work.
Thorud, Hanne-Mari Schiøtz; Helland, Magne; Aarås, Arne; Kvikstad, Tor Martin; Lindberg, Lars Göran; Horgen, Gunnar
2012-04-01
Eye strain during visually demanding computer work may include glare and increased squinting. The latter may be related to elevated tension in the orbicularis oculi muscle and the development of muscle pain. The aim of the study was to investigate the development of discomfort symptoms in relation to muscle activity and muscle blood flow in the orbicularis oculi muscle during computer work with visual strain. A group of healthy young adults with normal vision was randomly selected. Eye-related symptoms were recorded during a 2-h working session on a laptop. The participants were exposed to visual stressors such as glare and small font. Muscle load and blood flow were measured by electromyography and photoplethysmography, respectively. During 2 h of visually demanding computer work, there was a significant increase in the following symptoms: eye-related pain and tiredness, blurred vision, itchiness, gritty eyes, photophobia, dry eyes, and tearing eyes. Muscle load in the orbicularis oculi was significantly increased above baseline and stable at 1 to 1.5% of maximal voluntary contraction during the working sessions. Orbicularis oculi muscle blood flow increased significantly during the first part of the working sessions before returning to baseline. There were significant positive correlations between eye-related tiredness and orbicularis oculi muscle load, and between eye-related pain and muscle blood flow. Subjects who developed eye-related pain showed elevated orbicularis oculi muscle blood flow during computer work, but no differences in muscle load, compared with subjects with minimal pain symptoms. Eye strain during visually demanding computer work is related to the orbicularis oculi muscle. Muscle pain development during demanding, low-force exercise is associated with increased muscle blood flow, possibly secondary to a different muscle activity pattern and/or an increased mental stress level in subjects experiencing pain compared with subjects with minimal pain.
A Course on Reconfigurable Processors
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Huss, Sorin A.
2010-01-01
Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…
Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications
ERIC Educational Resources Information Center
Jung, Gueyoung
2010-01-01
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a wide-spread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
NASA Astrophysics Data System (ADS)
Tripathi, Vijay S.; Yeh, G. T.
1993-06-01
Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.
Addressing the minimum fleet problem in on-demand urban mobility.
Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C
2018-05-01
Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the following 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
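The core computation can be phrased as a minimum path cover on the vehicle-shareability network: connect trip i to trip j when a vehicle finishing i can reach j in time, and the minimum fleet equals the number of trips minus a maximum bipartite matching. The sketch below uses networkx and a crude Manhattan-distance travel-time placeholder; it illustrates the reduction, not the paper's real-time dispatcher.

# Minimum fleet via minimum path cover: trips are nodes; an edge i -> j means a
# vehicle finishing trip i can start trip j on time. Minimum fleet equals the
# number of trips minus a maximum bipartite matching. The travel-time model is
# a crude placeholder.
import networkx as nx
from networkx.algorithms import bipartite

trips = [  # (start_time_min, end_time_min, origin, destination) -- toy data
    (0, 15, (0, 0), (2, 3)),
    (20, 30, (2, 3), (5, 1)),
    (18, 40, (4, 4), (0, 0)),
    (45, 60, (5, 1), (4, 4)),
]

def travel_minutes(a, b, speed=0.5):
    return (abs(a[0] - b[0]) + abs(a[1] - b[1])) / speed   # Manhattan distance / speed

G = nx.Graph()
left = [("out", i) for i in range(len(trips))]
right = [("in", j) for j in range(len(trips))]
G.add_nodes_from(left, bipartite=0)
G.add_nodes_from(right, bipartite=1)
for i, (_, end_i, _, dest_i) in enumerate(trips):
    for j, (start_j, _, orig_j, _) in enumerate(trips):
        if i != j and end_i + travel_minutes(dest_i, orig_j) <= start_j:
            G.add_edge(("out", i), ("in", j))

matching = bipartite.hopcroft_karp_matching(G, top_nodes=left)
min_fleet = len(trips) - len(matching) // 2     # matching dict stores both directions
print("minimum fleet size:", min_fleet)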
The "Magic" of Wireless Access in the Library
ERIC Educational Resources Information Center
Balas, Janet L.
2006-01-01
It seems that the demand for public access computers grows exponentially every time a library network is expanded, making it impossible to ever have enough computers available for patrons. One solution that many libraries are implementing to ease the demand for public computer use is to offer wireless technology that allows patrons to bring in…
On modelling three-dimensional piezoelectric smart structures with boundary spectral element method
NASA Astrophysics Data System (ADS)
Zou, Fangxin; Aliabadi, M. H.
2017-05-01
The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to formulate the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent option for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms. PMID:26473166
The direction of cloud computing for Malaysian education sector in 21st century
NASA Astrophysics Data System (ADS)
Jaafar, Jazurainifariza; Rahman, M. Nordin A.; Kadir, M. Fadzil A.; Shamsudin, Syadiah Nor; Saany, Syarilla Iryani A.
2017-08-01
In the 21st century, technology has turned the learning environment into a new form of education, making learning systems more effective and systematic. Nowadays, education institutions face many challenges in ensuring that the teaching and learning process runs smoothly and is manageable. Some of the challenges in current education management are a lack of integrated systems, high maintenance costs, difficulty of configuration and deployment, and the complexity of storage provision. Digital learning is an instructional practice that uses technology to make the learning experience more effective and the education process more systematic and attractive. Digital learning can be considered one of the prominent applications implemented in a cloud computing environment. Cloud computing is a type of network resource that provides on-demand services, where users can access applications at any location and at any time. It also promises to minimize maintenance costs and provides flexible data storage capacity. The aim of this article is to review the definition and types of cloud computing for improving digital learning management as required by 21st century education. The analysis of the digital learning context focused on primary schools in Malaysia. Types of cloud applications and services in the education sector are also discussed in the article. Finally, a gap analysis and a direction for cloud computing in the education sector for facing the challenges of the 21st century are suggested.
Two-way cable television project
NASA Astrophysics Data System (ADS)
Wilkens, H.; Guenther, P.; Kiel, F.; Kraus, F.; Mahnkopf, P.; Schnee, R.
1982-02-01
The market demand for a multiuser computer system with interactive services was studied. Mean system work load at peak use hours was estimated and the complexity of dialog with a central computer was determined. Man machine communication by broadband cable television transmission, using digital techniques, was assumed. The end to end system is described. It is user friendly, able to handle 10,000 subscribers, and provides color television display. The central computer system architecture with remote audiovisual terminals is depicted and software is explained. Signal transmission requirements are dealt with. International availability of the test system, including sample programs, is indicated.
Wan, Yue; Yang, Hongwei; Masui, Toshihiko
2005-01-01
At present, ambient air pollution is a serious public health problem in China. Based on the concentration-response relationships provided by international and domestic epidemiologic studies, the authors estimated the mortality and morbidity induced by the ambient air pollution of 2000. To address the mechanism of the health impact on the national economy, the authors applied a computable general equilibrium (CGE) model, named AIM/Material China, containing 39 production sectors and 32 commodities. AIM/Material analyzes changes in gross domestic product (GDP), final demand, and production activity originating from health damages. If ambient air quality had met Grade II of China's air quality standard in 2000, the avoidable GDP loss would have been 0.38‰ of the national total, of which 95% was caused by labor loss. Comparatively, medical expenditure had less impact on the national economy, which is explained in terms of final demand by commodity and production activity by sector. The authors conclude that the CGE model is a suitable tool for assessing health impacts from the point of view of the national economy, as shown through the discussion of its applicability.
Gigaflop architecture, a hardware perspective
NASA Technical Reports Server (NTRS)
Feierbach, G. F.
1978-01-01
Any super computer built in the early 1980s will use components that are available by fall 1978. The architecture of such a system cannot depart radically from current super computers if the software experience painfully acquired from these computers in the 70's is to apply. Given the above constraints, 10 billion floating point operations per second (BFLOPS) are attainable and a problem memory of 512 million (64 bit) words could be supported by the technology of the time. In contrast to this, industry is likely to respond with commercially available machines with a performance of less than 150 MFLOPS. This is due to self-imposed constraints on the manufacturers to provide upward compatible architectures (same instruction set) and systems which can be sold in significant volumes. Since this computing speed is inadequate to meet the demands of computational fluid dynamics, a special processor is required. Issues which are felt to be significant in the pursuit of maximum compute capability in this special processor are discussed.
An Interactive Computer Tool for Teaching About Desalination and Managing Water Demand in the US
NASA Astrophysics Data System (ADS)
Ziolkowska, J. R.; Reyes, R.
2016-12-01
This paper presents an interactive tool to geospatially and temporally analyze desalination developments and trends in the US over the time span 1950-2013, the current contribution of desalination to satisfying water demand, and its future potential. The computer tool is open access and can be used by any user with an Internet connection, thus facilitating interactive learning about water resources. The tool can also be used by stakeholders and policy makers for decision-making support and for designing sustainable water management strategies. Desalination technology has been acknowledged as a solution for sustainably managing water demand from many sectors, including municipalities, industry, agriculture, power generation, and other users. Desalination has been applied successfully in the US and many countries around the world since the 1950s. As of 2013, around 1,336 desalination plants were operating in the US alone, with a daily production capacity of 2 BGD (billion gallons per day) (GWI, 2013). Despite a steady increase in the number of new desalination plants and growing production capacity, the costs of desalination are still prohibitive in many regions. At the same time, the technology offers tremendous potential for 'enormous supply expansion that exceeds all likely demands' (Chowdhury et al., 2013). The model and tool are based on data from Global Water Intelligence (GWI, 2013). The analysis shows that more than 90% of all the plants in the US are small-scale plants with a capacity below 4.31 MGD. Most of the plants (and especially the larger plants) are located on the US East Coast, as well as in California, Texas, Oklahoma, and Florida. The models and the tool provide information about the economic feasibility of potential new desalination plants based on access to feed water, energy sources, water demand, and the experiences of other plants in the region.
Battery resource assessment. Battery demands scenarios materials
NASA Astrophysics Data System (ADS)
Sullivan, D.
1980-12-01
Projections of demand for batteries and battery materials between 1980 and 2000 are presented. The estimates are based on existing predictions for the future of the electric vehicle, photovoltaic, utility load-leveling, and existing battery industry. Battery demand was first computed as kilowatt-hours of storage for various types of batteries. Using estimates for the materials required for each battery, the maximum demand that could be expected for each battery material was determined.
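As a purely illustrative companion to the procedure described above (storage demand per battery type multiplied by per-kWh material intensity), the sketch below shows the arithmetic; every figure in it is a hypothetical placeholder, not a value from the report.

    # Illustrative arithmetic only; all numbers are hypothetical placeholders.
    storage_demand_kwh = {              # projected kWh of storage by battery type
        "lead_acid": 5.0e9,
        "nickel_zinc": 1.2e9,
    }
    material_intensity_kg_per_kwh = {   # kg of each material per kWh of storage
        "lead_acid": {"lead": 20.0, "sulfuric_acid": 8.0},
        "nickel_zinc": {"nickel": 10.0, "zinc": 9.0},
    }

    material_demand_kg = {}
    for battery, kwh in storage_demand_kwh.items():
        for material, kg_per_kwh in material_intensity_kg_per_kwh[battery].items():
            material_demand_kg[material] = material_demand_kg.get(material, 0.0) + kwh * kg_per_kwh

    for material, kg in sorted(material_demand_kg.items()):
        print(f"{material}: {kg / 1e9:.1f} million tonnes")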
Supply and demand for radiographers in Lithuania: a prognosis for 2012-2030.
Vanckaviciene, Aurika; Starkiene, Liudvika; Macijauskiene, Jūrate
2014-07-01
This is the first study on the planning of the supply of and demand for radiographers in Lithuania. The aim of this study was to analyze the supply and demand for radiographers in the labor market with respect to their number, structure, and services, and to provide a prognosis for the period 2012-2030. Supply was calculated using two scenarios with differing durations of studies, annual student drop-out rates, rates of failure to start working, annual numbers of new entrants into the labor market, and emigration rates. Annual mortality rates, the number of first-year students, and retirement rates were treated identically in both scenarios. Two projections of the demand for radiographers were based on the population's differing (by age and gender) need for outpatient radiology services, computed tomography, and magnetic resonance scans. Subsequently, the supply and demand scenarios were compared. Evaluation of the perspective supply and demand scenarios - which are the most probable - revealed a gap forming during the analyzed period: the predicted specialist shortage will reach 0.13 full-time equivalents per 10,000 population, and by 2030, 0.37 full-time equivalents per 10,000 population. Considering the changes in the education of radiographers, the socio-demographic characteristics of the staff, and the increasing need for radiographers' services, the supply of radiographers during the next two decades will be insufficient. To meet the forecasted demand for radiographers in the perspective scenario, the number of students choosing this specialty from 2013 on should increase by up to 30%. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
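The supply scenarios described above can be read as a simple stock-flow projection: each year's workforce equals the previous year's stock plus new entrants minus leavers. The sketch below is a simplified illustration in that spirit, not the study's model, and every rate and starting value is a hypothetical placeholder.

    # Simplified stock-flow projection; all parameters are hypothetical placeholders.
    def project_supply(initial_fte, years, graduates_per_year,
                       dropout_rate, fail_to_start_rate,
                       emigration_rate, retirement_rate, mortality_rate):
        supply = initial_fte
        trajectory = {}
        for year in years:
            entrants = graduates_per_year * (1 - dropout_rate) * (1 - fail_to_start_rate)
            leavers = supply * (emigration_rate + retirement_rate + mortality_rate)
            supply += entrants - leavers
            trajectory[year] = supply
        return trajectory

    # Hypothetical example run; comparing this against a demand projection
    # locates the year in which a gap opens.
    supply_fte = project_supply(1400, range(2012, 2031), graduates_per_year=60,
                                dropout_rate=0.10, fail_to_start_rate=0.15,
                                emigration_rate=0.01, retirement_rate=0.02,
                                mortality_rate=0.005)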
Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.
ERIC Educational Resources Information Center
Parkland Coll., Champaign, IL.
A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size; the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
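The model-free versus model-based distinction drawn above maps onto two familiar pieces of code: a look-up table updated from experience versus explicit planning over a known model. The sketch below is a generic tabular illustration of that contrast, not the authors' two-step task or their analyses.

    # Generic illustration of the two strategies; not the authors' paradigm.
    import numpy as np

    n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1

    # Model-free: cheap look-up-table (Q-learning) update from one (s, a, r, s') sample.
    Q = np.zeros((n_states, n_actions))
    def q_learning_update(s, a, r, s_next):
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # Model-based: more demanding planning by value iteration over a known
    # transition model T[s, a, s'] and reward model R[s, a].
    def value_iteration(T, R, n_iters=100):
        V = np.zeros(n_states)
        for _ in range(n_iters):
            V = (R + gamma * T @ V).max(axis=1)   # Bellman optimality backup
        return V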
Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling
2014-01-01
Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems. PMID:24587089
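A rough sense of how the five factors named above interact can be conveyed with a few lines of Monte Carlo simulation. The sketch below is not the authors' discrete event model; the seasonality form and every parameter value are hypothetical placeholders.

    # Rough Monte Carlo sketch around the five demand factors; all values hypothetical.
    import numpy as np
    rng = np.random.default_rng(0)

    annual_admissions = 1200          # pneumonia admissions per year
    hypoxaemia_prevalence = 0.25      # fraction of admissions needing oxygen
    seasonality_amplitude = 0.5       # 0 = flat demand, 1 = strong seasonal peak
    flow_rate_lpm = 1.0               # litres per minute per patient
    treatment_days = 3                # mean days on oxygen

    day_of_year = np.arange(365)
    seasonal_factor = 1 + seasonality_amplitude * np.sin(2 * np.pi * day_of_year / 365)
    admissions = rng.poisson(annual_admissions / 365.0 * seasonal_factor)
    on_oxygen = rng.binomial(admissions, hypoxaemia_prevalence)

    # Approximate the concurrent patient load by spreading each case over its stay.
    load = np.convolve(on_oxygen, np.ones(treatment_days), mode="same")
    daily_volume_litres = load * flow_rate_lpm * 60 * 24
    print("peak concurrent patients:", int(load.max()))
    print("share of days above annual-average demand: %.0f%%"
          % (100 * (daily_volume_litres > daily_volume_litres.mean()).mean()))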
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
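The central claim that complex queries can be an integral part of the analysis is easy to illustrate with a generic relational example. The sketch below uses SQLite and an invented voxel time-series schema purely for illustration; it is not the authors' database design or software stack.

    # Generic illustration (invented schema): selection and aggregation happen
    # inside the database rather than over binary or text files.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE bold (subject TEXT, run INTEGER, voxel INTEGER,
                                       t INTEGER, signal REAL)""")
    conn.executemany("INSERT INTO bold VALUES (?, ?, ?, ?, ?)",
                     [("s01", 1, v, t, 100.0 + v + 0.1 * t)
                      for v in range(3) for t in range(10)])

    # A query standing in for one analysis step: per-voxel mean signal for a
    # given subject and run, computed by the database engine itself.
    rows = conn.execute("""SELECT voxel, AVG(signal) FROM bold
                           WHERE subject = 's01' AND run = 1
                           GROUP BY voxel""").fetchall()
    print(rows)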
Estimation of conformational entropy in protein-ligand interactions: a computational perspective.
Polyansky, Anton A; Zubac, Ruben; Zagrovic, Bojan
2012-01-01
Conformational entropy is an important component of the change in free energy upon binding of a ligand to its target protein. As a consequence, development of computational techniques for reliable estimation of conformational entropies is currently receiving an increased level of attention in the context of computational drug design. Here, we review the most commonly used techniques for conformational entropy estimation from classical molecular dynamics simulations. Although by-and-large still not directly used in practical drug design, these techniques provide a golden standard for developing other, computationally less-demanding methods for such applications, in addition to furthering our understanding of protein-ligand interactions in general. In particular, we focus on the quasi-harmonic approximation and discuss different approaches that can be used to go beyond it, most notably, when it comes to treating anharmonic and/or correlated motions. In addition to reviewing basic theoretical formalisms, we provide a concrete set of steps required to successfully calculate conformational entropy from molecular dynamics simulations, as well as discuss a number of practical issues that may arise in such calculations.
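For readers unfamiliar with the quasi-harmonic approximation mentioned above, the usual recipe is to diagonalize a mass-weighted covariance matrix of atomic fluctuations and sum the entropies of the resulting effective harmonic modes. The sketch below is a schematic NumPy rendering of that recipe under stated assumptions (SI units, frames already superposed on a reference); it is not production code and not the authors' implementation.

    # Schematic quasi-harmonic entropy estimate; assumes coords in metres,
    # masses in kg, and snapshots already superposed on a reference structure.
    import numpy as np

    kB = 1.380649e-23        # J/K
    hbar = 1.054571817e-34   # J*s

    def quasiharmonic_entropy(coords, masses, temperature=300.0):
        """coords: (n_frames, n_atoms, 3) array; returns entropy in J/K."""
        n_frames, n_atoms, _ = coords.shape
        x = coords.reshape(n_frames, 3 * n_atoms)
        cov = np.cov(x, rowvar=False)                 # positional covariance (3N x 3N)
        m = np.repeat(np.asarray(masses), 3)
        sigma = np.sqrt(np.outer(m, m)) * cov         # mass-weighted covariance
        lam = np.linalg.eigvalsh(sigma)
        lam = lam[lam > 0]                            # keep physically meaningful modes
        omega = np.sqrt(kB * temperature / lam)       # effective mode frequencies
        u = hbar * omega / (kB * temperature)
        s_per_mode = u / np.expm1(u) - np.log1p(-np.exp(-u))
        return kB * s_per_mode.sum()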
Task Decomposition Model for Dispatchers in Dynamic Scheduling of Demand Responsive Transit Systems
DOT National Transportation Integrated Search
2000-06-01
Since the passage of ADA, the demand for paratransit service is steadily increasing. Paratransit companies are relying on computer automation to streamline dispatch operations, increase productivity and reduce operator stress and error. Little resear...
El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter
2012-01-01
Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.
Considerations for Future Climate Data Stewardship
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P. T.; Chapman, D. R.
2009-12-01
In this talk, we describe the lessons learned from processing and generating a decade of gridded AIRS and MODIS IR sounding data. We describe the challenges faced in accessing and sharing very large data sets, maintaining data provenance under evolving technologies, obtaining access to legacy calibration data, and permanently preserving Earth science data records for on-demand services. These lessons suggest that a new approach to data stewardship will be required for the next decade of hyperspectral instruments combined with cloud-resolving models. It will not be sufficient for stewards of future data centers simply to provide the public with access to archived data; our experience indicates that data need to reside close to computers with ultra-large disc farms and tens of thousands of processors in order to deliver complex services on demand over very high-speed networks, much like the offerings of search engines today. Over the first decade of the 21st century, petabyte data records were acquired from the AIRS instrument on Aqua and the MODIS instrument on Aqua and Terra. NOAA data centers also maintain petabytes of operational IR sounder data collected over the past four decades. The UMBC Multicore Computational Center (MC2) developed a Service Oriented Atmospheric Radiance gridding system (SOAR) to allow users to select IR sounding instruments from multiple archives and choose space-time-spectral periods of Level 1B data to download, grid, visualize, and analyze on demand. Providing this service requires high-bandwidth access to the online disks at Goddard. After 10 years, cost-effective disk storage technology finally caught up with the MODIS data volume, making it possible for Level 1B MODIS data to be available online. However, 10 GbE fiber optic networks for accessing large volumes of data are still not available from GSFC to serve the broader community, and data transfer rates well below 10 MB/s limit their usefulness for climate studies. During this decade, processor performance hit a power wall, leading computer vendors to design multicore processor chips; high-performance computer systems obtained petaflop performance by clustering tens of thousands of them. Thus, power consumption and autonomic recovery from processor and disc failures have become major cost and technical considerations for future data archives. To address these new architecture requirements, a transparent parallel programming paradigm, the Hadoop MapReduce cloud computing system, became available as open software. In addition, the Hadoop File System manages the distribution of data to these processors and backs up the processing in the event of any processor or disc failure. To employ this paradigm, however, the data need to be stored on the computing system itself. We conclude with a climate data preservation approach that addresses the scaling to exabyte data requirements for the next decade, based on projections of processor, disc data density, and bandwidth doubling rates.
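The MapReduce pattern invoked above can be illustrated without Hadoop at all: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group independently, which is what makes the processing parallelizable and restartable. The sketch below is a plain-Python toy with made-up records and a made-up 1-degree grid, not code from the SOAR system.

    # Plain-Python toy of map/shuffle/reduce: mean radiance per 1-degree grid cell.
    from collections import defaultdict
    from math import floor

    records = [(38.2, -76.5, 101.3), (38.7, -76.1, 99.8), (40.1, -74.9, 87.4)]

    # Map: emit (grid_cell, value) pairs.
    mapped = [((floor(lat), floor(lon)), radiance) for lat, lon, radiance in records]

    # Shuffle: group values by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # Reduce: one aggregate per key, each group processed independently.
    cell_means = {cell: sum(vals) / len(vals) for cell, vals in groups.items()}
    print(cell_means)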
Efficient operating system level virtualization techniques for cloud resources
NASA Astrophysics Data System (ADS)
Ansu, R.; Samiksha; Anju, S.; Singh, K. John
2017-11-01
Cloud computing is an advancing technology that provides infrastructure, platform, and software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so resources must be made available on demand to satisfy user requirements. The technique by which resources (namely storage, processing power, memory, and network I/O) are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually, and no modifications are required in the guest OS, as the OS deals with the hypervisor directly. In paravirtualization, the guest OS must be modified to run in parallel with other operating systems, and for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance, and improved security. This paper outlines both virtualization techniques and discusses the issues in OS-level virtualization.
NASA Technical Reports Server (NTRS)
Clukey, Steven J.
1991-01-01
The real time Dynamic Data Acquisition and Processing System (DDAPS) is described which provides the capability for the simultaneous measurement of velocity, density, and total temperature fluctuations. The system of hardware and software is described in context of the wind tunnel environment. The DDAPS replaces both a recording mechanism and a separate data processing system. DDAPS receives input from hot wire anemometers. Amplifiers and filters condition the signals with computer controlled modules. The analog signals are simultaneously digitized and digitally recorded on disk. Automatic acquisition collects necessary calibration and environment data. Hot wire sensitivities are generated and applied to the hot wire data to compute fluctuations. The presentation of the raw and processed data is accomplished on demand. The interface to DDAPS is described along with the internal mechanisms of DDAPS. A summary of operations relevant to the use of the DDAPS is also provided.
Introducing Cloud Computing Topics in Curricula
ERIC Educational Resources Information Center
Chen, Ling; Liu, Yang; Gallagher, Marcus; Pailthorpe, Bernard; Sadiq, Shazia; Shen, Heng Tao; Li, Xue
2012-01-01
The demand for graduates with exposure in Cloud Computing is on the rise. For many educational institutions, the challenge is to decide on how to incorporate appropriate cloud-based technologies into their curricula. In this paper, we describe our design and experiences of integrating Cloud Computing components into seven third/fourth-year…
Crops in silico: A community wide multi-scale computational modeling framework of plant canopies
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Christensen, A.; Borkiewic, K.; Yiwen, X.; Ellis, A.; Panneerselvam, B.; Kannan, K.; Shrivastava, S.; Cox, D.; Hart, J.; Marshall-Colon, A.; Long, S.
2016-12-01
Current crop models predict a looming gap between supply and demand for primary foodstuffs over the next 100 years. While significant yield increases were achieved in major food crops during the early years of the green revolution, the current rates of yield increases are insufficient to meet future projected food demand. Furthermore, with projected reduction in arable land, decrease in water availability, and increasing impacts of climate change on future food production, innovative technologies are required to sustainably improve crop yield. To meet these challenges, we are developing Crops in silico (Cis), a biologically informed, multi-scale, computational modeling framework that can facilitate whole plant simulations of crop systems. The Cis framework is capable of linking models of gene networks, protein synthesis, metabolic pathways, physiology, growth, and development in order to investigate crop response to different climate scenarios and resource constraints. This modeling framework will provide the mechanistic details to generate testable hypotheses toward accelerating directed breeding and engineering efforts to increase future food security. A primary objective for building such a framework is to create synergy among an inter-connected community of biologists and modelers to create a realistic virtual plant. This framework advantageously casts the detailed mechanistic understanding of individual plant processes across various scales in a common scalable framework that makes use of current advances in high performance and parallel computing. We are currently designing a user friendly interface that will make this tool equally accessible to biologists and computer scientists. Critically, this framework will provide the community with much needed tools for guiding future crop breeding and engineering, understanding the emergent implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and 3D decomposition in domain is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a real description of wave propagation.
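The domain decomposition and MPI communication mentioned above boil down to each rank owning a block of the grid and exchanging ghost (halo) layers with its neighbours before every stencil update. The sketch below shows that pattern for a 1-D decomposition with mpi4py; it is a generic illustration under those assumptions, not the authors' 3-D poroelastic code.

    # Generic 1-D halo exchange with mpi4py (run with e.g.: mpiexec -n 4 python halo.py).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                                  # interior points owned by this rank
    field = np.full(n_local + 2, float(rank))      # +2 ghost cells at the ends

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange ghost cells with both neighbours before a finite-difference update.
    comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
    comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

    # A stencil update can now safely read field[0] and field[-1] as neighbour data.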
NASA Astrophysics Data System (ADS)
Brookshire, D. S.; Coursey, D.; Dimint, A.; Tidwell, V.
2004-12-01
Since 1950, the demand for water has more than doubled in the United States. Historically, growing demands have been met by increasing reservoir capacity and through groundwater mining, often at the expense of environmental and cultural concerns. The future is expected to hold much the same. Demand for water will continue to increase, particularly in response to the expanding urban sector, while growing concerns over the environment are prompting interest in allocating more water for in-stream uses. So, where will this water come from? Virtually all water supplies are allocated. Providing for new uses requires a reduction in the amount of water dedicated to existing uses. The water banking/leasing model is formulated within a system dynamics context using the object-oriented commercial software package Powersim Studio 2003. System dynamics provides a unique mathematical framework for integrating the natural and social processes important to managing natural resources and can provide an interactive interface for engaging the public in the decision process. These system-level models focus on capturing the broad structure of the system, specifically the feedback and time delays between interacting subsystems. The spatially aggregated models are computationally efficient, allowing simulations to be conducted on a PC in a matter of seconds to minutes. By employing interactive interfaces, these models can be taken directly to the public or decision maker. To demonstrate the water banking/leasing model, application has been made to potential markets on the Rio Grande. Specifically, the model spans the reach between Elephant Butte Reservoir (central New Mexico) and the New Mexico/Texas state line. Primary sectors in the model include climate, surface and groundwater, riparian and aquatic habitat, watershed processes, water quality, water demand (residential, commercial, industrial, institutional, and agricultural), economics, policy, and legal institutions. Within the model, the basin is divided into four distinct but interacting reaches, and a monthly time-step is employed. River operations and water demand trends have been calibrated to historical data.
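The stock-and-flow logic of such a system dynamics model can be conveyed with a toy monthly loop: stocks (e.g., reservoir storage) are updated by flows (inflow, releases), and feedbacks (e.g., population growth driving demand) close the loop. The sketch below is a toy illustration only; it is not the Powersim model, and every coefficient is a made-up placeholder.

    # Toy monthly stock-and-flow loop; all coefficients are made-up placeholders.
    storage = 500.0                 # reservoir storage, thousand acre-feet (kaf)
    population = 800_000
    history = []
    for month in range(12 * 20):    # 20-year simulation at a monthly time-step
        inflow = 40.0 + 20.0 * (month % 12 in (3, 4, 5))    # crude spring-runoff bump
        demand = population * 0.00012                        # kaf per person-month
        release = min(demand, storage + inflow)
        shortage = demand - release
        storage = storage + inflow - release                 # stock update
        population *= 1.0015                                 # growth feeds back into demand
        history.append((month, storage, shortage))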
Multi-hop routing mechanism for reliable sensor computing.
Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min
2009-01-01
Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large numbers of sensors, and supporting fault tolerance against battery exhaustion and broken nodes. A sensor node is naturally exposed to unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol, to specify the most reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead, and reduce energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and Ad hoc On-demand Distance Vector (AODV) routing mechanisms.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches such as registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well-equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
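The decomposition idea itself (reducing a multidimensional interpolation to independent 1-D passes) is simple to show in code. In the sketch below, plain linear interpolation via np.interp stands in for the paper's registration-based 1-D control grid interpolator, so this illustrates only the decomposition, not DMCGI itself.

    # Decomposition of a 2-D resize into independent 1-D passes; np.interp is a
    # stand-in for the registration-based 1-D interpolator used in the paper.
    import numpy as np

    def interp_axis0(image, new_len):
        old = np.linspace(0.0, 1.0, image.shape[0])
        new = np.linspace(0.0, 1.0, new_len)
        return np.stack([np.interp(new, old, image[:, j])
                         for j in range(image.shape[1])], axis=1)

    def resize_2d(image, new_rows, new_cols):
        rows_done = interp_axis0(image, new_rows)     # 1-D pass along one axis
        return interp_axis0(rows_done.T, new_cols).T  # independent 1-D pass along the other

    img = np.arange(16.0).reshape(4, 4)
    print(resize_2d(img, 8, 8).shape)                 # (8, 8)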
Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Arnegard, Ruth J.; Comstock, J. R., Jr.
1991-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
The multi-attribute task battery for human operator workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Arnegard, Ruth J.
1992-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN was developed to aid the control system analyst by providing a self consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPAK for eigen-analysis and from LINPAK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements. There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
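Two of the capabilities listed above (the generalized eigenvalue problem and the matrix exponential used for sampled-data state transition) have direct modern equivalents in SciPy. The snippet below shows those equivalents for orientation only; it does not use SAMSAN's own FORTRAN 77 routines.

    # Modern SciPy equivalents of two SAMSAN capabilities (illustration only).
    import numpy as np
    from scipy.linalg import eig, expm

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.eye(2)

    # Capability 2-style problem: generalized eigenvalues of A v = lambda B v.
    eigenvalues, eigenvectors = eig(A, B)

    # Capability 4-style problem: matrix exponential, e.g. the state-transition
    # matrix of x' = A x over a sample period T.
    T = 0.1
    Phi = expm(A * T)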
Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven
2010-11-01
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?
Madhyastha, Tara M.; Koh, Natalie; Day, Trevor K. M.; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J.; Rajan, Sabreena; Woelfer, Karl A.; Wolf, Jonathan; Grabowski, Thomas J.
2017-01-01
The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows “in the cloud.” Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster. PMID:29163119
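The cost-estimation step described above often reduces to simple arithmetic once a benchmark of per-subject runtime exists: total compute hours times an hourly instance price, plus storage. The sketch below uses entirely hypothetical prices and runtimes, not actual AWS rates or the paper's benchmark figures.

    # Back-of-the-envelope cost model; prices and runtimes are hypothetical.
    def estimate_cost(n_subjects, hours_per_subject, jobs_in_parallel,
                      price_per_instance_hour, storage_gb, price_per_gb_month):
        compute_hours = n_subjects * hours_per_subject
        compute_cost = compute_hours * price_per_instance_hour
        wall_clock_hours = compute_hours / jobs_in_parallel
        storage_cost = storage_gb * price_per_gb_month
        return compute_cost + storage_cost, wall_clock_hours

    cost, wall_clock = estimate_cost(n_subjects=100, hours_per_subject=6,
                                     jobs_in_parallel=25,
                                     price_per_instance_hour=0.40,   # hypothetical
                                     storage_gb=500, price_per_gb_month=0.025)
    print(f"estimated cost: ${cost:.0f}, wall-clock time: {wall_clock:.0f} h")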
Raskovic, Dejan; Giessel, David
2009-11-01
The goal of the study presented in this paper is to develop an embedded biomedical system capable of delivering maximum performance on demand, while maintaining optimal energy efficiency whenever possible. Several hardware and software solutions are presented that allow the system to intelligently change the power supply voltage and frequency at runtime. The resulting system allows the use of more energy-efficient components, operates most of the time in its most battery-efficient mode, and provides the means to quickly change operating mode while maintaining reliable performance. While all of these techniques extend battery life, the main benefit is on-demand availability of computational performance from a system that is not over-provisioned. Biomedical applications, perhaps more than any other, require battery operation, favor infrequent battery replacements, and can benefit from increased performance under certain conditions (e.g., when an anomaly is detected), which makes them ideal candidates for this approach. In addition, if the system is part of a body area network, it needs to be light, inexpensive, and adaptable enough to satisfy the changing requirements of the other nodes in the network.
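The energy argument behind runtime voltage and frequency changes is, to first order, the CMOS dynamic-energy relation E ~ C V^2 per switched cycle. The sketch below illustrates that relation with hypothetical numbers; it is not a model of the specific hardware in the paper.

    # First-order CMOS dynamic-energy comparison; all values are hypothetical.
    def dynamic_energy_joules(switched_capacitance_f, supply_voltage_v, cycles):
        # E ~ C * V^2 per cycle; frequency sets how fast the work finishes,
        # voltage sets how much energy each cycle costs.
        return switched_capacitance_f * supply_voltage_v ** 2 * cycles

    task_cycles = 50_000_000
    high_perf = dynamic_energy_joules(1e-9, 3.3, task_cycles)   # fast, high-voltage mode
    low_power = dynamic_energy_joules(1e-9, 1.8, task_cycles)   # slow, low-voltage mode
    print(f"energy ratio (low-power / high-performance): {low_power / high_perf:.2f}")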
ERIC Educational Resources Information Center
Carlin, Anna; Manson, Daniel P.; Zhu, Jake
2010-01-01
With the projected higher demand for Network Systems Analysts and increasing computer crime, network security specialists are an organization's first line of defense. The principal function of this paper is to describe the evolution of Collegiate Cyber Defense Competitions (CCDC), the event planning required, soliciting sponsors, recruiting personnel…
Bandwidth reduction for video-on-demand broadcasting using secondary content insertion
NASA Astrophysics Data System (ADS)
Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy
2005-01-01
An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These allocation policies, although non-optimal, reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
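The workload-redistribution idea lends itself to a simple greedy illustration: offload each low-demand task to the cheapest idle low- or medium-resource node that can hold it, and keep the rest in the data center. The sketch below is only an illustration under assumed energy figures, not the assignment policy evaluated in the paper.

```python
# Illustrative greedy sketch of application-aware workload redistribution:
# low-demand tasks are offloaded to idle low/medium-power WSN nodes, and only
# the remainder stays on the high-performance facility. Energy figures and the
# assignment rule are assumptions for illustration, not the paper's policy.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float        # workload units the node can absorb while idle
    power_per_unit: float  # energy cost per workload unit (arbitrary units)

def assign(tasks, edge_nodes, datacenter_power_per_unit):
    """Assign each task to the cheapest node that still has capacity."""
    plan, energy = [], 0.0
    for demand in sorted(tasks):                       # smallest tasks first
        candidates = [n for n in edge_nodes if n.capacity >= demand]
        if candidates:
            best = min(candidates, key=lambda n: n.power_per_unit)
            best.capacity -= demand
            plan.append((demand, best.name))
            energy += demand * best.power_per_unit
        else:                                          # fall back to the data center
            plan.append((demand, "datacenter"))
            energy += demand * datacenter_power_per_unit
    return plan, energy

if __name__ == "__main__":
    nodes = [Node("sensor-hub-A", 4.0, 0.2), Node("gateway-B", 10.0, 0.5)]
    plan, energy = assign([1.0, 2.5, 8.0, 3.0], nodes, datacenter_power_per_unit=1.0)
    print(plan, round(energy, 2))
```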
Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nanostructured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.
Functional integration of vertical flight path and speed control using energy principles
NASA Technical Reports Server (NTRS)
Lambregts, A. A.
1984-01-01
A generalized automatic flight control system was developed which integrates all longitudinal flight path and speed control functions previously provided by a pitch autopilot and autothrottle. In this design, a net thrust command is computed based on total energy demand arising from both flight path and speed targets. The elevator command is computed based on the energy distribution error between flight path and speed. The engine control is configured to produce the commanded net thrust. The design incorporates control strategies and hierarchy to deal systematically and effectively with all aircraft operational requirements, control nonlinearities, and performance limits. Consistent decoupled maneuver control is achieved for all modes and flight conditions without outer loop gain schedules, control law submodes, or control function duplication.
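The abstract's split between a thrust command driven by total energy demand and an elevator command driven by the energy distribution error can be illustrated with a few lines of arithmetic. The sketch below is a simplified rendering of that idea with made-up gains and limits; it is not the published control law.

```python
# Simplified sketch of the total-energy control idea described above:
# thrust is driven by the total specific-energy-rate error, the elevator by the
# energy-distribution error between flight path and speed. Gains, signs, and
# limits are illustrative assumptions, not the certified control law.

G = 9.81  # m/s^2

def tecs_commands(gamma_err, accel_err, weight_n,
                  kt=0.5, ke=0.02, thrust_limits=(0.0, 1.0e5)):
    """gamma_err: flight-path-angle error (rad); accel_err: speed-rate error (m/s^2)."""
    # Total specific-energy-rate error: dE/dt ~ gamma + V_dot/g
    energy_rate_err = gamma_err + accel_err / G
    # Energy-distribution error: how the error is split between path and speed
    distribution_err = gamma_err - accel_err / G

    thrust_cmd = max(thrust_limits[0],
                     min(thrust_limits[1], kt * weight_n * energy_rate_err))
    elevator_cmd = ke * distribution_err   # rad, sign convention assumed
    return thrust_cmd, elevator_cmd

if __name__ == "__main__":
    print(tecs_commands(gamma_err=0.02, accel_err=-0.3, weight_n=600e3))
```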
Galaxy CloudMan: delivering cloud compute clusters
2010-01-01
Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
Towards Real Information on Demand.
ERIC Educational Resources Information Center
Barker, Philip
The phrase "information on demand" is often used to describe situations in which digital electronic information can be delivered to particular points of need at times and in ways that are determined by the specific requirements of individual consumers or client groups. The advent of "mobile" computing equipment now makes the…
Griffiths, Karin Lindgren; Mackey, Martin G; Adamson, Barbara J
2011-12-01
The purpose of this study was to identify and compare individual behavioral and psychophysiological responses to workload demands and stressors associated with the reporting of musculoskeletal symptoms with computer work. Evidence is growing that the prevalence of musculoskeletal symptoms increases with longer hours of computer work and exposure to psychosocial stressors such as high workloads and unrealistic deadlines. Workstyle, or how an individual worker behaves in response to such work demands, may also be an important factor associated with musculoskeletal symptoms in computer operators. Approximately 8,000 employees of the Australian Public Service were invited to complete an on-line survey if they worked with a computer for 15 or more hours per week. The survey was a composite of three questionnaires: the ASSET to measure perceived organizational stressors, Nordic Musculoskeletal Questionnaire to measure reported prevalence of musculoskeletal symptoms and additional questions to measure individual work behaviors and responses. 934 completed surveys were accepted for analyses. Logistic regression was used to identify significant behavioral and work response predictors of musculoskeletal symptoms. Reporting of heightened muscle tension in response to workload pressure was more strongly associated, than other physical behavioral factors, with musculoskeletal symptoms for all body areas, particularly the neck (OR = 2.50, 95% CI: 2.09-2.99). Individual workstyles in response to workload demands and stressors, including working with heightened muscle tension and mental fatigue, were significantly associated with musculoskeletal symptoms. Future risk management strategies should have a greater focus on the identification and management of those organizational factors that are likely to encourage and exacerbate adverse workstyles.
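Odds ratios with confidence intervals, such as the OR = 2.50 (95% CI 2.09-2.99) reported for neck symptoms, come from exponentiating logistic-regression coefficients and their confidence bounds. The sketch below shows that computation on synthetic data; the variables and effect sizes are assumptions, not the survey data.

```python
# Illustrative sketch of how odds ratios such as OR = 2.50 (95% CI 2.09-2.99)
# are derived from a logistic regression: exponentiate the coefficient and its
# confidence bounds. The data below are synthetic, not the survey data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
muscle_tension = rng.integers(0, 2, n)            # reported heightened tension
workload = rng.normal(0, 1, n)                    # perceived workload stressor
logit = -1.0 + 0.9 * muscle_tension + 0.4 * workload
symptoms = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([muscle_tension, workload]))
fit = sm.Logit(symptoms.astype(int), X).fit(disp=False)

odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
for name, orat, (lo, hi) in zip(["const", "tension", "workload"], odds_ratios, ci):
    print(f"{name:9s} OR={orat:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")
```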
NASA Astrophysics Data System (ADS)
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
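The core of the proposed approach is comparing where CAD surface points project under the nominal geometry versus a slightly misaligned one. The sketch below illustrates that comparison with a deliberately simplified cone-beam projection model; the geometry and perturbation values are assumptions for illustration, not the paper's instrument model.

```python
# Minimal sketch of the idea: project surface points with a nominal cone-beam
# geometry and with a slightly perturbed one, and look at the discrepancy in
# detector coordinates. The geometry model and numbers are simplified
# assumptions, not the paper's instrument model.

import numpy as np

def project(points, src_det_dist, src_obj_dist, det_offset=(0.0, 0.0), det_tilt=0.0):
    """Project 3D points (N,3) onto a flat detector (u,v), all lengths in mm."""
    x, y, z = points.T
    mag = src_det_dist / (src_obj_dist + y)        # per-point magnification (y = depth)
    u = x * mag
    v = z * mag
    # small in-plane detector rotation + translation (the "misalignment")
    c, s = np.cos(det_tilt), np.sin(det_tilt)
    return np.column_stack([c * u - s * v + det_offset[0],
                            s * u + c * v + det_offset[1]])

# A few surface points sampled from a (hypothetical) CAD model, in mm
pts = np.array([[5.0, 0.0, 2.0], [-3.0, 1.0, -4.0], [0.5, -0.5, 6.0]])
nominal = project(pts, src_det_dist=800.0, src_obj_dist=200.0)
perturbed = project(pts, 800.0, 200.0, det_offset=(0.05, -0.02), det_tilt=np.deg2rad(0.1))
print("edge-point discrepancy (mm):", np.linalg.norm(perturbed - nominal, axis=1))
```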
Grids, Clouds, and Virtualization
NASA Astrophysics Data System (ADS)
Cafaro, Massimo; Aloisio, Giovanni
This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.
CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability
NASA Technical Reports Server (NTRS)
Claus, Russell; Weitzer, Ilan
2002-01-01
Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meeting these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and a proposed application.
An Information Theoretic Characterisation of Auditory Encoding
Overath, Tobias; Cusack, Rhodri; Kumar, Sukhbinder; von Kriegstein, Katharina; Warren, Jason D; Grube, Manon; Carlyon, Robert P; Griffiths, Timothy D
2007-01-01
The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content. PMID:17958472
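As a concrete illustration of the entropy metric, the sketch below computes the Shannon entropy (in bits) of the pitch distribution of a sequence; a nearly constant sequence scores low, a varied one scores high. This first-order calculation is only illustrative and does not reproduce the stimulus construction used in the study.

```python
# Sketch of an entropy metric for pitch sequences: the Shannon entropy (bits)
# of the distribution of pitches in a sequence. The example sequences are
# made up for illustration.

from collections import Counter
from math import log2

def shannon_entropy(sequence):
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * log2(c / total) for c in counts.values())

low_entropy_pitches = ["A4"] * 12 + ["C5"] * 4            # mostly redundant signal
high_entropy_pitches = ["A4", "B4", "C5", "D5", "E5", "F5", "G5", "A5"] * 2

print(round(shannon_entropy(low_entropy_pitches), 3))      # low information content
print(round(shannon_entropy(high_entropy_pitches), 3))     # high information content
```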
Computer Training for Entrepreneurial Meteorologists.
NASA Astrophysics Data System (ADS)
Koval, Joseph P.; Young, George S.
2001-05-01
Computer applications of increasing diversity form a growing part of the undergraduate education of meteorologists in the early twenty-first century. The advent of the Internet economy, as well as a waning demand for traditional forecasters brought about by better numerical models and statistical forecasting techniques, has greatly increased the need for operational and commercial meteorologists to acquire computer skills beyond the traditional techniques of numerical analysis and applied statistics. Specifically, students with the skills to develop data distribution products are in high demand in the private sector job market. Meeting these demands requires greater breadth, depth, and efficiency in computer instruction. The authors suggest that computer instruction for undergraduate meteorologists should include three key elements: a data distribution focus, emphasis on the techniques required to learn computer programming on an as-needed basis, and a project orientation to promote management skills and support student morale. In an exploration of this approach, the authors have reinvented the Applications of Computers to Meteorology course in the Department of Meteorology at The Pennsylvania State University to teach computer programming within the framework of an Internet product development cycle. Because the computer skills required for data distribution programming change rapidly, specific languages are valuable for only a limited time. A key goal of this course was therefore to help students learn how to retrain efficiently as technologies evolve. The crux of the course was a semester-long project during which students developed an Internet data distribution product. As project management skills are also important in the job market, the course teamed students in groups of four for this product development project. The successes, failures, and lessons learned from this experiment are discussed and conclusions drawn concerning undergraduate instructional methods for computer applications in meteorology.
Mazur, Lukasz M; Mosaly, Prithima R; Moore, Carlton; Comitz, Elizabeth; Yu, Fei; Falchook, Aaron D; Eblan, Michael J; Hoyle, Lesley M; Tracton, Gregg; Chera, Bhishamjit S; Marks, Lawrence B
2016-11-01
To assess the relationship between (1) task demands and workload, (2) task demands and performance, and (3) workload and performance, all during physician-computer interactions in a simulated environment. Two experiments were performed in 2 different electronic medical record (EMR) environments: WebCIS (n = 12) and Epic (n = 17). Each participant was instructed to complete a set of prespecified tasks on 3 routine clinical EMR-based scenarios: urinary tract infection (UTI), pneumonia (PN), and heart failure (HF). Task demands were quantified using behavioral responses (click and time analysis). At the end of each scenario, subjective workload was measured using the NASA-Task-Load Index (NASA-TLX). Physiological workload was measured using pupillary dilation and electroencephalography (EEG) data collected throughout the scenarios. Performance was quantified based on the maximum severity of omission errors. Data analysis indicated that the PN and HF scenarios were significantly more demanding than the UTI scenario for participants using WebCIS (P < .01), and that the PN scenario was significantly more demanding than the UTI and HF scenarios for participants using Epic (P < .01). In both experiments, the regression analysis indicated a significant relationship only between task demands and performance (P < .01). Results suggest that task demands as experienced by participants are related to participants' performance. Future work may support the notion that task demands could be used as a quality metric that is likely representative of performance, and perhaps patient outcomes. The present study is a reasonable next step in a systematic assessment of how task demands and workload are related to performance in EMR-evolving environments.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent non-incremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
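The bitplane idea can be illustrated directly: because convolution is linear, processing the input one bitplane at a time (most significant first) yields a monotonically refined result that can be stopped at any point. The sketch below is a simplified rendering of that principle, not the authors' packed software design.

```python
# Illustrative sketch of bitplane-based incremental computation: a 2-D
# convolution is refined bitplane by bitplane (MSB first), so it can be
# stopped early and still return the result at the precision computed so far.

import numpy as np
from scipy.signal import convolve2d

def incremental_convolution(image, kernel, max_bitplanes=8):
    """Yield successively refined convolution results, one bitplane at a time."""
    result = np.zeros((image.shape[0] + kernel.shape[0] - 1,
                       image.shape[1] + kernel.shape[1] - 1), dtype=np.int64)
    for b in range(max_bitplanes - 1, -1, -1):        # most significant plane first
        plane = (image >> b) & 1                       # binary bitplane of the input
        result += (1 << b) * convolve2d(plane, kernel, mode="full")
        yield b, result.copy()                         # usable partial result

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (64, 64), dtype=np.int64)
    ker = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.int64)
    exact = convolve2d(img, ker, mode="full")
    for bit, partial in incremental_convolution(img, ker):
        err = np.abs(exact - partial).max()
        print(f"after bitplane {bit}: max abs error = {err}")   # decreases monotonically
```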
Essays on Mathematical Optimization for Residential Demand Response in the Energy Sector
NASA Astrophysics Data System (ADS)
Palaparambil Dinesh, Lakshmi
In the electric utility industry, it could be challenging to adjust supply to match demand due to large generator ramp up times, high generation costs and insufficient in-house generation capacity. Demand response (DR) is a technique for adjusting the demand for electric power instead of the supply. Direct Load Control (DLC) is one of the ways to implement DR. DLC program participants sign up for power interruption contracts and are given financial incentives for curtailing electricity usage during peak demand time periods. This dissertation studies a DLC program for residential air conditioners using mathematical optimization models. First, we develop a model that determines what contract parameters to use in designing contracts between the provider and residential customers, when to turn which power unit on or off and how much power to cut during peak demand hours. The model uses information on customer preferences for choice of contract parameters such as DLC financial incentives and energy usage curtailment. In numerical experiments, the proposed model leads to projected cost savings of the order of 20%, compared to a current benchmark model used in practice. We also quantify the impact of factors leading to cost savings and study characteristics of customers picked by different contracts. Second, we study a DLC program in a macro economic environment using a Computable General Equilibrium (CGE) model. A CGE model is used to study the impact of external factors such as policy and technology changes on different economic sectors. Here we differentiate customers based on their preference for DLC programs by using different values for price elasticity of demand for electricity commodity. Consequently, DLC program customers could substitute demand for electricity commodity with other commodities such as transportation sector. Price elasticity of demand is calculated using a novel methodology that incorporates customer preferences for DLC contracts from the first model. The calculation of elasticity based on our methodology is useful since the prices of commodities are not only determined by aggregate demand and supply but also by customers' relative preferences for commodities. In addition to this we quantify the indirect substitution and rebound effects on sectoral activity levels, incomes and prices based on customer differences, when DLC is implemented.
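The first model's flavor of decision, choosing how much load to curtail under contract limits at minimum incentive cost, can be illustrated with a tiny linear program. The sketch below uses made-up numbers and a much simpler formulation than the dissertation's models.

```python
# A minimal linear-programming sketch of the direct-load-control idea: during a
# peak hour, decide how much load to curtail from each contract group so that
# total demand fits the available capacity at minimum incentive cost. The
# numbers, contract caps, and cost structure are illustrative assumptions,
# far simpler than the dissertation's models.

import numpy as np
from scipy.optimize import linprog

incentive_cost = np.array([12.0, 18.0, 25.0])   # $ per MWh curtailed, per contract group
max_curtail = np.array([5.0, 8.0, 3.0])         # MWh each group agreed to shed
total_demand = 100.0                             # MWh forecast at the peak hour
capacity = 90.0                                  # MWh the utility can supply cheaply

# minimize incentive cost subject to: sum(curtailment) >= demand - capacity
res = linprog(c=incentive_cost,
              A_ub=[-np.ones(3)], b_ub=[-(total_demand - capacity)],
              bounds=list(zip(np.zeros(3), max_curtail)),
              method="highs")
print("curtailment per group (MWh):", np.round(res.x, 2))
print("total incentive cost ($):", round(res.fun, 2))
```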
Real-time quasi-3D tomographic reconstruction
NASA Astrophysics Data System (ADS)
Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.
2018-06-01
Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, having access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user of the software can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.
NASA Astrophysics Data System (ADS)
Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian
2010-05-01
Computational modeling and observational data analysis are two major aspects of modern scientific research. Both are nowadays under extensive development and application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques and algorithms need to be coupled and integrated. The existing level of scientific cooperation and the technical capabilities for operative communication allow considerable progress in the development of a distributed international Research Infrastructure (RI), based on the computational modelling and data analysis centers existing in Europe, which provides the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will appear as a product of the collaborative communication and joint research efforts of the numerical and data analysis experts together with planetary scientists. The major goal of the EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory; and 4) enabling Models and Routines for High Performance Computing Grids. Using the advantages of coordinated operation and efficient communication between the involved computational modelling, research and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, 3) continuously developing and fast-upgrading computational and data analysis service to support and intensify European planetary scientific research. At the beginning EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science. This work will aim at the following objectives: (a) development and implementation of tools for distant interactive communication between planetary scientists and computing experts (including related RIs); (b) development of standard routine packages and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by specialized planetary scientists; (c) development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) development of a prototype of coordinated interconnected simulations of planetary phenomena and objects (global multi-model simulators); (f) providing demonstrators of a coordinated use of high-performance computing facilities (supercomputer networks), done in cooperation with the European HPC Grid DEISA.
An Adaptive Multilevel Security Framework for the Data Stored in Cloud Environment
Dorairaj, Sudha Devi; Kaliannan, Thilagavathy
2015-01-01
Cloud computing is renowned for delivering information technology services based on the internet. Nowadays, organizations are interested in moving their massive data and computations into the cloud to reap the significant benefits of on-demand service, resource pooling, and rapid elasticity that help to satisfy dynamically changing infrastructure demand without the burden of owning, managing, and maintaining it. Since the data needs to be secured throughout its life cycle, security of the data in the cloud is a major challenge to be concentrated on, because the data resides on a third party's premises. Any uniform simple or high-level security method for all the data either compromises the sensitive data or proves to be too costly with increased overhead. Any common multiple method for all data becomes vulnerable when the common security pattern is identified at the event of a successful attack on any information, and also encourages more attacks on all other data. This paper suggests an adaptive multilevel security framework based on cryptography techniques that provides adequate security for the classified data stored in the cloud. The proposed security system acclimates well to the cloud environment and is also customizable and more reliable in meeting the required level of security for data with different sensitivity, which changes with business needs and commercial conditions. PMID:26258165
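The multilevel idea, stronger (and costlier) protection for more sensitive data, can be illustrated by mapping classification levels to different AES-GCM key lengths with the cryptography package. This toy sketch is only an illustration of level-dependent protection; it is not the adaptive framework proposed in the paper, and the level names are invented.

```python
# Toy sketch of level-dependent protection: data classified at different
# sensitivity levels is protected with different AES-GCM key lengths, and
# public data is stored as-is. Only an illustration, not the paper's framework.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_BITS = {"public": None, "internal": 128, "confidential": 192, "secret": 256}

def protect(data: bytes, level: str):
    bits = KEY_BITS[level]
    if bits is None:                     # public data: store unencrypted
        return None, None, data
    key = AESGCM.generate_key(bit_length=bits)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, data, level.encode())
    return key, nonce, ciphertext        # key would go to a key-management service

if __name__ == "__main__":
    key, nonce, blob = protect(b"patient record #42", "confidential")
    # round-trip check with the classification level as associated data
    assert AESGCM(key).decrypt(nonce, blob, b"confidential") == b"patient record #42"
    print(len(blob), "bytes stored for the confidential record")
```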
A Computer Interview for Multivariate Monitoring of Psychiatric Outcome.
ERIC Educational Resources Information Center
Stevenson, John F.; And Others
Application of computer technology to psychiatric outcome measurement offers the promise of coping with increasing demands for extensive patient interviews repeated longitudinally. Described is the development of a cost-effective multi-dimensional tracking device to monitor psychiatric functioning, building on a previous local computer interview…
ERIC Educational Resources Information Center
Snapp, Robert R.; Neumann, Maureen D.
2015-01-01
The rapid growth of digital technology, including the worldwide adoption of mobile and embedded computers, places new demands on K-grade 12 educators and their students. Young people should have an opportunity to learn the technical knowledge of computer science (e.g., computer programming, mathematical logic, and discrete mathematics) in order to…
NASA Astrophysics Data System (ADS)
Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon
2016-06-01
One of the supply chain risks for hi-tech products is the result of rapid technological innovation; it results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communications products. From a practical standpoint, a more realistic replenishment policy needs to consider the impact of such risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model has the suboptimal solution with a fixed replenishment interval and a simpler computational process; the other has the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.
Cloud computing: a new business paradigm for biomedical information sharing.
Rosenthal, Arnon; Mork, Peter; Li, Maya Hao; Stanford, Jean; Koester, David; Reynolds, Patti
2010-04-01
We examine how the biomedical informatics (BMI) community, especially consortia that share data and applications, can take advantage of a new resource called "cloud computing". Clouds generally offer resources on demand. In most clouds, charges are pay per use, based on large farms of inexpensive, dedicated servers, sometimes supporting parallel computing. Substantial economies of scale potentially yield costs much lower than dedicated laboratory systems or even institutional data centers. Overall, even with conservative assumptions, for applications that are not I/O intensive and do not demand a fully mature environment, the numbers suggested that clouds can sometimes provide major improvements, and should be seriously considered for BMI. Methodologically, it was very advantageous to formulate analyses in terms of component technologies; focusing on these specifics enabled us to bypass the cacophony of alternative definitions (e.g., exactly what does a cloud include) and to analyze alternatives that employ some of the component technologies (e.g., an institution's data center). Relative analyses were another great simplifier. Rather than listing the absolute strengths and weaknesses of cloud-based systems (e.g., for security or data preservation), we focus on the changes from a particular starting point, e.g., individual lab systems. We often find a rough parity (in principle), but one needs to examine individual acquisitions--is a loosely managed lab moving to a well managed cloud, or a tightly managed hospital data center moving to a poorly safeguarded cloud?
Management of eWork health issues: a new perspective on an old problem.
Kirk, Elizabeth; Strong, Jenny
2010-01-01
Contact centres are vehicles for a rapidly growing group of knowledge workers, or eWorkers. Using computers and high-speed telecommunications connections as work tools, these employees spend long hours performing mentally demanding work while maintaining static, physically stressful, seated positions. The complex interplay between job demands, work environment, and individual differences combine to produce high levels of physical discomfort among eWorkers. This paper discusses a new view that has emerged, one that focuses on the management rather than the elimination of work related upper limb disorders (WRULD) and computer vision syndrome (CVS) issues that are prevalent among eWorkers. It also reviews a cultural shift among practitioners and business that moves towards a consultative process and the sharing of knowledge among all stakeholders. The controlled work conditions and large single location workforce found within contact centres provide the opportunity to understand the personal and industry cost of eWork injuries and the ability to develop and review new multifaceted interventions. Advances in training and workplace design aimed at decreasing discomfort and injury and reducing the associated economic burden may then be adapted for all eWorkforce groups.
Software-defined optical network for metro-scale geographically distributed data centers.
Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren
2016-05-30
The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to empower application reliability and flexible scalability. We present a software-defined inter data center network to enable on-demand scale out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches, on-demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.
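A central function of such a control plane is the wavelength and routing assignment step. The sketch below shows a minimal version, shortest-path routing with first-fit wavelength assignment under wavelength continuity, on a made-up three-node topology; it is not the module used in the testbed.

```python
# Sketch of a simple routing-and-wavelength-assignment step: shortest-path
# routing plus first-fit wavelength assignment under the wavelength-continuity
# constraint. The topology and wavelength count are made-up examples.

import networkx as nx

def first_fit_rwa(graph, src, dst, n_wavelengths=8):
    """Return (path, wavelength) or (path, None) if the request is blocked."""
    path = nx.shortest_path(graph, src, dst, weight="length")
    links = list(zip(path, path[1:]))
    for w in range(n_wavelengths):                    # first-fit over wavelengths
        if all(w not in graph.edges[u, v].setdefault("used", set()) for u, v in links):
            for u, v in links:
                graph.edges[u, v]["used"].add(w)      # reserve the wavelength on each hop
            return path, w
    return path, None                                  # blocked: no common free wavelength

if __name__ == "__main__":
    g = nx.Graph()
    g.add_edge("DC1", "DC2", length=5)
    g.add_edge("DC2", "DC3", length=12)
    g.add_edge("DC1", "DC3", length=25)
    print(first_fit_rwa(g, "DC1", "DC3"))    # e.g. (['DC1', 'DC2', 'DC3'], 0)
```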
Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...
2015-07-13
Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.
NASA Astrophysics Data System (ADS)
Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent a widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? Which role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
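The 70% figure is a simple efficiency ratio of achieved to theoretical peak performance. The sketch below reproduces that arithmetic with an assumed per-core peak; the per-core value is an illustrative assumption, not a number from the paper.

```python
# Back-of-the-envelope check of the reported HPL efficiency: achieved TFLOPS
# divided by the theoretical peak of the cluster. The per-core peak is an
# assumed illustrative value, not a figure from the paper.

cores = 240
peak_gflops_per_core = 11.9        # assumed per-core double-precision peak
achieved_tflops = 2.0              # HPL result reported in the abstract

theoretical_tflops = cores * peak_gflops_per_core / 1000.0
efficiency = achieved_tflops / theoretical_tflops
print(f"theoretical peak ~ {theoretical_tflops:.2f} TFLOPS, "
      f"efficiency ~ {efficiency:.0%}")               # ~70%
```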
Commercial Demand Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
[Development of automatic urine monitoring system].
Wei, Liang; Li, Yongqin; Chen, Bihua
2014-03-01
An automatic urine monitoring system is presented to replace manual operation. The system is composed of a flow sensor, an MSP430F149 single-chip microcomputer, a human-computer interaction module, an LCD module, a clock module and a memory module. The signal of urine volume is captured when the urine flows through the flow sensor and is then displayed on the LCD after data processing. The experimental results suggest that the design of the monitor provides high stability, accurate measurement and good real-time performance, and meets the demands of clinical application.
An integrated communications demand model
NASA Astrophysics Data System (ADS)
Doubleday, C. F.
1980-11-01
A computer model of communications demand is being developed to permit dynamic simulations of the long-term evolution of demand for communications media in the U.K. to be made under alternative assumptions about social, economic and technological trends in British Telecom's business environment. The context and objectives of the project and the potential uses of the model are reviewed, and four key concepts in the demand for communications media, around which the model is being structured are discussed: (1) the generation of communications demand; (2) substitution between media; (3) technological convergence; and (4) competition. Two outline perspectives on the model itself are given.
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Molecular docking performance evaluated on the D3R Grand Challenge 2015 drug-like ligand datasets
NASA Astrophysics Data System (ADS)
Selwa, Edithe; Martiny, Virginie Y.; Iorga, Bogdan I.
2016-09-01
The D3R Grand Challenge 2015 was focused on two protein targets: Heat Shock Protein 90 (HSP90) and Mitogen-Activated Protein Kinase Kinase Kinase Kinase 4 (MAP4K4). We used a protocol involving a preliminary analysis of the available data in PDB and PubChem BioAssay, followed by a docking/scoring step using the more computationally demanding parameters that were required to provide more reliable predictions. We found that different docking software packages and scoring functions can behave differently on individual ligand datasets, and that the flexibility of specific binding-site residues is a crucial element in providing good predictions.
Educational Technology on Demand: It's about Time!
ERIC Educational Resources Information Center
Weir, Bob; Mickool, Rick; Hitch, Leslie
2006-01-01
Today's incoming freshmen, born in 1988, have never known a time when the Internet and personal computers were not ubiquitous. They expect "what I want, when I need it, wherever I happen to be, on whatever workstation I have available." Many industries already meet this demand--entertainment (legal or pirated), cable TV, digital video recorders,…
Dynamic SLA Negotiation in Autonomic Federated Environments
NASA Astrophysics Data System (ADS)
Rubach, Pawel; Sobolewski, Michael
Federated computing environments offer requestors the ability to dynamically invoke services offered by collaborating providers in the virtual service network. Without an efficient resource management that includes Dynamic SLA Negotiation, however, the assignment of providers to customer's requests cannot be optimized and cannot offer high reliability without relevant SLA guarantees. We propose a new SLA-based SERViceable Metacomputing Environment (SERVME) capable of matching providers based on QoS requirements and performing autonomic provisioning and deprovisioning of services according to dynamic requestor needs. This paper presents the SLA negotiation process that includes on-demand provisioning and uses an object-oriented SLA model for large-scale service-oriented systems supported by SERVME. An initial reference implementation in the SORCER environment is also described.
Cloud based emergency health care information service in India.
Karthikeyan, N; Sukanesh, R
2012-12-01
A hospital is a health care organization providing patient treatment by expert physicians, surgeons and equipment. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is the major root cause of deaths resulting from uncertain diseases, constituting a serious public health problem. Mentally affected, differently abled and unconscious patients cannot communicate their medical history to medical practitioners. Also, medical practitioners cannot edit or view DICOM images instantly. Our aim is to provide a palm vein pattern recognition based medical record retrieval system, using cloud computing, for the above mentioned people. Distributed computing technology is emerging in new forms such as Grid computing and Cloud computing. These new forms promise to deliver Information Technology (IT) as a service. In this paper, we describe how these new forms of distributed computing can help modern health care industries. Cloud computing is extending its benefits to industrial sectors, especially in medical scenarios. In Cloud Computing, IT-related capabilities and resources are provided as services, via distributed computing, on demand. This paper is concerned with developing software as a service (SaaS) by means of Cloud computing, with the aim of bringing the emergency health care sector under one umbrella with securely stored patient records. In framing emergency healthcare treatment, the crucial information needed to make decisions about patients is their previous health records. Thus ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises secured patient record access. Likewise, our paper presents an efficient means to view, edit or transfer DICOM images instantly, which was a challenging task for medical practitioners in past years. We have developed two services for health care: 1. a cloud based palm vein recognition system; 2. distributed medical image processing tools for medical practitioners.
Chattopadhyay, Sudip; Chaudhuri, Rajat K; Freed, Karl F
2011-04-28
The improved virtual orbital-complete active space configuration interaction (IVO-CASCI) method enables an economical and reasonably accurate treatment of static correlation in systems with significant multireference character, even when using a moderate basis set. This IVO-CASCI method supplants the computationally more demanding complete active space self-consistent field (CASSCF) method by producing comparable accuracy with diminished computational effort because the IVO-CASCI approach does not require additional iterations beyond an initial SCF calculation, nor does it encounter convergence difficulties or multiple solutions that may be found in CASSCF calculations. Our IVO-CASCI analytical gradient approach is applied to compute the equilibrium geometry for the ground and lowest excited state(s) of the theoretically very challenging 2,6-pyridyne, 1,2,3-tridehydrobenzene and 1,3,5-tridehydrobenzene anionic systems for which experiments are lacking, accurate quantum calculations are almost completely absent, and commonly used calculations based on single reference configurations fail to provide reasonable results. Hence, the computational complexity provides an excellent test for the efficacy of multireference methods. The present work clearly illustrates that the IVO-CASCI analytical gradient method provides a good description of the complicated electronic quasi-degeneracies during the geometry optimization process for the radicaloid anions. The IVO-CASCI treatment produces almost identical geometries as the CASSCF calculations (performed for this study) at a fraction of the computational labor. Adiabatic energy gaps to low lying excited states likewise emerge from the IVO-CASCI and CASSCF methods as very similar. We also provide harmonic vibrational frequencies to demonstrate the stability of the computed geometries.
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
The Crazy Business of Internet Peeping, Privacy, and Anonymity.
ERIC Educational Resources Information Center
Van Horn, Royal
2000-01-01
Peeping software takes several forms and can be used on a network or to monitor a certain computer. E-Mail Plus, for example, hides inside a computer and sends exact copies of incoming or outgoing e-mail anywhere. School staff with monitored computers should demand e-mail privacy. (MLH)
The baby boom, the baby bust, and the housing market.
Mankiw, N G; Weil, D N
1989-05-01
This paper explores the impact of demographic changes on the housing market in the US, 1st by reviewing the facts about the Baby Boom, 2nd by linking age and housing demand using census data for 1970 and 1980, 3rd by computing the effect of demand on price of housing and on the quantity of residential capital, and last by constructing a theoretical model to plot the predictability of the jump in demand caused by the Baby Boom. The Baby Boom in the U.S. lasted from 1946-1964, with a peak in 1957 when 4.3 million babies were born. In 1980 19.7% of the population were aged 20-30, compared to 13.3% in 1960. Demand for housing was modeled for a given household from census data, resulting in the finding that demand rises sharply at age 20-30, then declines after age 40 by 1% per year. Thus between 1970 and 1980 the real value of housing for an adult at any given age jumped 50%, while the real disposable personal income per capita rose 22%. The structure of demand is such that the swelling in the rate of growth in housing demand peaked in 1980, with a rate of 1.66% per year. Housing demand and real price of housing were highly correlated and inelastic. If this relationship holds in the future, the real price of housing should fall about 3% per year, or 47% by 2007. The theoretical model, a variation of the Poterba model, ignoring inflation and taxation, suggests that fluctuations in prices caused by changes in demand are not foreseen by the market, even though they are predictable in principle 20 years in advance. As the effects of falling housing prices become apparent, there may be a potential for economic instability, but people may be induced to save more because their homes will no longer provide the funds for retirement.
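The demand calculation sketched in the abstract, a per-capita age profile that rises sharply between ages 20 and 30 and declines about 1% per year after 40, weighted by the age distribution, can be illustrated in a few lines. The profile values and populations below are invented for illustration; only the shape follows the abstract.

```python
# Small sketch of an age-profile housing demand calculation: per-capita demand
# rises sharply for ages 20-30, then declines ~1% per year after 40; aggregate
# demand is the profile weighted by the age distribution. Numbers are invented.

def per_capita_demand(age):
    if age < 20:
        return 0.1
    if age <= 40:
        return 0.1 + 0.9 * min(age - 20, 10) / 10     # sharp rise from 20 to 30
    return 1.0 * (0.99 ** (age - 40))                  # ~1% decline per year after 40

def aggregate_demand(population_by_age):
    return sum(count * per_capita_demand(age) for age, count in population_by_age.items())

# Hypothetical age distributions (millions of adults) before and after a baby
# boom cohort reaches its twenties.
pre_boom = {25: 10, 35: 12, 45: 12, 55: 10, 65: 8}
boom =     {25: 18, 35: 12, 45: 12, 55: 10, 65: 8}
print(round(aggregate_demand(pre_boom), 1), round(aggregate_demand(boom), 1))
```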
An Algorithm for Pedestrian Detection in Multispectral Image Sequences
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.
2017-05-01
The growing interest in self-driving cars creates demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions, such as fog and low light, also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on a simplified Kalman filtering scheme suitable for implementation on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. An estimate of the real optical flow is obtained from a multispectral image sequence. The difference between the synthetic and real optical flows yields the optical flow induced by pedestrians, and the final detection is performed by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
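A minimal sketch of the flow-differencing step described above, assuming the synthetic (pedestrian-free) and observed optical flow fields are already available as arrays; the threshold and blob-size values are illustrative, and the slanted-plane synthesis and multispectral flow estimation are outside the sketch.

```python
import numpy as np

def pedestrian_mask(real_flow, synthetic_flow, threshold=1.5, min_pixels=50):
    """Flow-differencing step: keep pixels where the observed flow deviates from
    the pedestrian-free synthetic flow. Both inputs are HxWx2 (u, v) arrays;
    the threshold and minimum blob size are illustrative values."""
    residual = np.linalg.norm(real_flow - synthetic_flow, axis=2)  # per-pixel flow difference
    mask = residual > threshold
    # a connected-component filter (e.g. scipy.ndimage.label) could refine this further
    return mask if mask.sum() >= min_pixels else np.zeros_like(mask)
```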
GATECloud.net: a platform for large-scale, open-source text processing on the cloud.
Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina
2013-01-28
Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.
2015-12-01
The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.
NASA Technical Reports Server (NTRS)
1990-01-01
Optacon II uses the same basic technique of converting printed information into a tactile image as did the Optacon. Optacon II can also be connected directly to a personal computer, which opens up a new range of job opportunities for the blind. Optacon II is not limited to reading printed words; it can convert any graphic image viewed by the camera. Optacon II demands extensive training for blind operators. TSI provides 60-hour training courses at its Mountain View headquarters and at training centers around the world. TeleSensory discontinued production of the Optacon as of December 1996.
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
NASA Technical Reports Server (NTRS)
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
The Effect of Computers on School Air-Conditioning.
ERIC Educational Resources Information Center
Fickes, Michael
2000-01-01
Discusses the issue of increased air-conditioning demand when schools equip their classrooms with computers that require enhanced and costlier air-conditioning systems. Air-conditioning costs are analyzed in two elementary schools and a middle school. (GR)
The demand for consumer health information.
Wagner, T H; Hu, T W; Hibbard, J H
2001-11-01
Using data from an evaluation of a community-wide informational intervention, we modeled the demand for medical reference books, telephone advice nurses, and computers for health information. Data were gathered from random household surveys in Boise, ID (experimental site), Billings, MT, and Eugene, OR (control sites). Conditional difference-in-differences show that the intervention increased the use of medical reference books, advice nurses, and computers for health information by approximately 15%, 6%, and 4%, respectively. The results also suggest that the intervention was associated with a decreased reliance on health professionals for information.
Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing
NASA Astrophysics Data System (ADS)
Chine, Karim
The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.
GPU-accelerated phase extraction algorithm for interferograms: a real-time application
NASA Astrophysics Data System (ADS)
Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei
2016-11-01
Optical testing, with its merits of being non-destructive and highly sensitive, provides a vital guideline for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is too slow for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on the CPU and a 6.6x speedup over a Matlab implementation on a DAWNING W5801 workstation. The performance improvement fulfills the demands of computational accuracy and real-time application.
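For orientation, a simplified per-pixel least-squares phase-extraction step is sketched below under the assumption that the phase shifts are known; the advanced iterative algorithm referenced in the abstract additionally re-estimates the shifts themselves, and the GPU version maps this per-pixel solve onto the device.

```python
import numpy as np

def ls_phase(frames, deltas):
    """Per-pixel least-squares phase extraction for I_k = a + b*cos(phi + d_k),
    assuming the shifts d_k are known. The advanced iterative algorithm (AIA)
    alternates a solve like this with re-estimation of the shifts themselves."""
    n, h, w = frames.shape
    X = np.column_stack([np.ones(n), np.cos(deltas), np.sin(deltas)])  # n x 3 design matrix
    coef, *_ = np.linalg.lstsq(X, frames.reshape(n, -1), rcond=None)   # 3 x (h*w) coefficients
    a1, a2 = coef[1], coef[2]            # a1 = b*cos(phi), a2 = -b*sin(phi)
    return np.arctan2(-a2, a1).reshape(h, w)
```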
Hot Rolling Scrap Reduction through Edge Cracking and Surface Defects Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaudoin, Armand
2016-05-29
The design of future aircraft must address the combined demands for fuel efficiency, reduced emissions and lower operating costs. One contribution to these goals is weight savings through the development of new alloys and design techniques for airframe structures. This research contributes to light-weighting through the fabrication of monolithic components from advanced aluminum alloys by making a link between alloy processing history and in-service performance. Specifically, this research demonstrates the link between growing cracks and features of the alloy microstructure that follow from thermo-mechanical processing. This is achieved through a computer model of crack deviation. The model is validated against experimental data from production-scale aluminum alloy plate, and the effect of changes in processing history on crack growth is demonstrated. The model is cast in the open-source finite element code WARP3D, which is freely downloadable and well documented. This project provides benefit along several avenues. First, the technical contribution of the computer model offers the materials engineer a critical means of providing guidance both upstream, to process tuning to achieve optimal properties, and downstream, to enhance fault tolerance. Beyond the fuel savings and emissions reduction inherent in the light-weighting of aircraft structures, improved fault tolerance addresses demands for longer inspection intervals over baseline, and a lower life cycle cost.
A general equilibrium model of a production economy with asset markets
NASA Astrophysics Data System (ADS)
Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
2006-10-01
In this paper, a general equilibrium model of a monetary production economy is presented. The model is characterized by three classes of agents: a representative firm, heterogeneous households, and the government. Two markets are considered (a labour market and a goods market), and two assets are traded in exchange for money, namely government bonds and equities. Households provide the labour force and decide on consumption and savings, whereas the firm provides consumption goods and demands labour. The government receives taxes from households and pays interest on its debt. The Walrasian equilibrium is derived analytically. The dynamics through quantity-constrained equilibria away from the Walrasian equilibrium are also studied by means of computer simulations.
ERIC Educational Resources Information Center
Morris, Kathleen M.
2010-01-01
Today's college students are often labeled the "Net Generation" and assumed to be computer savvy and technological minded. Exposure to and use of technologies can increase self-efficacy regarding ability to complete desired computer tasks, but students arrive on campuses unable to pass computer proficiency exams. This is concerning because some…
Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.
Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka
2015-01-01
Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.
Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road
Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka
2015-01-01
Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on “on-demand payment” for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible. PMID:26230400
Approaches to Enable Demand Response by Industrial Loads for Ancillary Services Provision
NASA Astrophysics Data System (ADS)
Zhang, Xiao
Demand response has gained significant attention in recent years because of its potential to enhance the power system's operational flexibility in a cost-effective way. Industrial loads such as aluminum smelters, steel manufacturers, and cement plants are well suited to supporting power system operation through demand response programs, because of their intensive power consumption, their existing advanced monitoring and control infrastructure, and the strong economic incentive created by high energy costs. In this thesis, we study approaches to efficiently integrate each of these types of manufacturing processes as demand response resources. The aluminum smelting process can change its power consumption both accurately and quickly by controlling the pots' DC voltage, without affecting production quality. Hence, an aluminum smelter has both the motivation and the ability to participate in demand response. First, we focus on determining the optimal regulation capacity that such a plant should provide. Next, we focus on determining its optimal bidding strategy in the day-ahead energy and ancillary services markets. Electric arc furnaces (EAFs) in steel manufacturing consume a large amount of electric energy. However, a steel plant can take advantage of time-based electricity prices by optimally arranging energy-consuming activities to avoid peak hours. We first propose scheduling methods that incorporate the EAFs' flexibility to reduce electricity cost, then propose methods to make the computations more tractable, and finally extend the scheduling formulations to enable the provision of spinning reserve. Cement plants can quickly adjust their power consumption rate by switching the crushers on and off. However, switching the loading units on and off achieves only discrete power changes, which prevents the load from offering valuable ancillary services such as regulation and load following, since these services require continuous power changes. We propose methods that enable these services with the support of an on-site energy storage device. As demonstrated by the case studies, the proposed approaches are effective and can generate practical production instructions for the industrial loads. This thesis not only provides methods to enable demand response by industrial loads but also potentially encourages industrial loads to become active in electricity markets.
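As a toy illustration of the time-based-pricing idea for the EAF case, the sketch below assigns a fixed number of heats to the cheapest hours of a price profile. The prices, energy per heat, and greedy rule are illustrative assumptions; the thesis itself develops full scheduling formulations rather than this simple heuristic.

```python
import numpy as np

def schedule_heats(prices, n_heats, mwh_per_heat=40.0):
    """Toy scheduler: place EAF heats in the cheapest hours of an hourly
    price profile ($/MWh). All numbers are illustrative assumptions."""
    order = np.argsort(prices)[:n_heats]                 # cheapest n_heats hours
    cost = float(prices[order].sum() * mwh_per_heat)     # energy cost of the schedule
    return sorted(int(h) for h in order), cost

hours, cost = schedule_heats(np.array([30, 55, 80, 25, 45, 90, 20, 35.0]), n_heats=3)
print(hours, cost)   # heats land in the three cheapest hours
```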
ERIC Educational Resources Information Center
Hsieh, Tung-Cheng; Lee, Ming-Che; Su, Chien-Yuan
2013-01-01
In recent years, the demand for computer programming professionals has increased rapidly. These computer engineers not only play a key role in the national development of the computing and software industries, they also have a significant influence on the broader national knowledge industry. Therefore, one of the objectives of information…
The Overdominance of Computers
ERIC Educational Resources Information Center
Monke, Lowell W.
2006-01-01
Most schools are unwilling to consider decreasing computer use at school because they fear that without screen time, students will not be prepared for the demands of a high-tech 21st century. Monke argues that having young children spend a significant amount of time on computers in school is harmful, particularly when children spend so much…
Computing Careers and Irish Higher Education: A Labour Market Anomaly
ERIC Educational Resources Information Center
Stephens, Simon; O'Donnell, David; McCusker, Paul
2007-01-01
This paper explores the impact of developments in the Irish economy and labour market on computing course development in the higher education (HE) sector. Extant computing courses change, or new courses are introduced, in attempts to match labour market demands. The conclusion reached here, however, is that Irish HE is producing insufficient…
Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.
Chen, Tiffany J; Kotecha, Nikesh
2014-01-01
Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers, who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies as well as the computational complexity of analytical tools require the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.
A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies
Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; ...
2015-01-21
Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. Furthermore, a tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial.
A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies
Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; Blackwood, Christopher B.; Rosen, Gail L.
2015-01-01
Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. A tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial. Availability: http://www.ece.drexel.edu/gailr/EESI/tutorial.php. PMID:25607539
Mendel-GPU: haplotyping and genotype imputation on graphics processing units
Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth
2012-01-01
Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633
The Cloud Area Padovana: from pilot to production
NASA Astrophysics Data System (ADS)
Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.
2017-10-01
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been tested and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We also discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this Cloud infrastructure is being used. In particular, we focus on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on cernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
Optimal cube-connected cube multiprocessors
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Wu, Jie
1993-01-01
Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. However, in general the partitioned subproblems are very large. They demand high performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube does on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, enhance the overall performance. The concept of optimal CCCubes, which are CCCubes with a minimum number of links for a given total number of nodes, and a method to obtain them are proposed. The superiority of optimal CCCubes over standard hypercubes was also shown in terms of link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer type parallel algorithms was identified. It was shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A
2010-01-01
Especially in the life-science and health-care sectors, IT requirements are immense due to the large and complex systems that must be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for massive number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show, for the prime example of molecular dynamics simulations, how the presence within grid infrastructures of large grid clusters with very fast network interconnects now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be handled in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.
NAS Demand Predictions, Transportation Systems Analysis Model (TSAM) Compared with Other Forecasts
NASA Technical Reports Server (NTRS)
Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu
2006-01-01
The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at a county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's purpose of the trip, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports, based upon different future scenarios. Different demographic scenarios can be analyzed to model the demand sensitivity to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant-year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel cost. Also, changes in travel time and convenience have an influence on the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM, many factors can be parameterized and various demand sensitivities can be predicted for future travel. The resulting demand scenarios can be incorporated into future flight schedules, thereby providing a quantifiable demand for flights in the NAS for a range of futures. In addition, new future airline business scenarios are investigated that illustrate when direct flights can replace connecting flights and when larger aircraft can be substituted, where justified by demand.
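The Fratar step mentioned above is, at its core, an iterative proportional balancing of an origin-destination trip table toward grown row and column totals. The sketch below illustrates that balancing under assumed growth factors; in TSAM the growth comes from the county-level mode-choice results rather than fixed factors.

```python
import numpy as np

def fratar(base_od, row_growth, col_growth, iters=50, tol=1e-9):
    """Fratar-style balancing: scale a base origin-destination trip table so its
    row and column sums match grown targets (targets must share the same grand
    total for the iteration to converge). Growth factors here are illustrative."""
    od = base_od.astype(float).copy()
    target_rows = od.sum(axis=1) * np.asarray(row_growth, float)
    target_cols = od.sum(axis=0) * np.asarray(col_growth, float)
    for _ in range(iters):
        od *= (target_rows / od.sum(axis=1))[:, None]   # match row totals
        od *= (target_cols / od.sum(axis=0))[None, :]   # match column totals
        if np.allclose(od.sum(axis=1), target_rows, rtol=tol):
            break
    return od
```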
Utilizing Traveler Demand Modeling to Predict Future Commercial Flight Schedules in the NAS
NASA Technical Reports Server (NTRS)
Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu
2006-01-01
The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at a county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's purpose of the trip, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports, based upon different future scenarios. Different demographic scenarios can be analyzed to model the demand sensitivity to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant-year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel cost. Also, changes in travel time and convenience have an influence on the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM, many factors can be parameterized and various demand sensitivities can be predicted for future travel. The resulting demand scenarios can be incorporated into future flight schedules, thereby providing a quantifiable demand for flights in the NAS for a range of futures. In addition, new future airline business scenarios are investigated that illustrate when direct flights can replace connecting flights and when larger aircraft can be substituted, where justified by demand.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
Parallel, distributed and GPU computing technologies in single-particle electron microscopy.
Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-07-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
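The published tutorial is written in R; as a language-neutral illustration of the vectorization idea, the sketch below runs a hypothetical three-state (Healthy/Sick/Dead) microsimulation in Python with made-up transition probabilities, drawing next states for all individuals at once instead of looping over them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state model (0=Healthy, 1=Sick, 2=Dead) with made-up
# transition probabilities; the tutorial's R model attaches costs and
# QALYs to each state in the same cycle loop.
P = np.array([[0.85, 0.10, 0.05],
              [0.30, 0.55, 0.15],
              [0.00, 0.00, 1.00]])

def simulate(n_individuals=10_000, n_cycles=30):
    states = np.zeros(n_individuals, dtype=int)            # everyone starts Healthy
    trace = np.empty((n_cycles + 1, 3))
    trace[0] = np.bincount(states, minlength=3) / n_individuals
    for t in range(1, n_cycles + 1):
        # vectorized transition: sample next states for all individuals at once
        cum = P[states].cumsum(axis=1)
        states = (rng.random((n_individuals, 1)) < cum).argmax(axis=1)
        trace[t] = np.bincount(states, minlength=3) / n_individuals
    return trace

print(simulate()[-1])   # share of the cohort in each state after 30 cycles
```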
Meyer, L C
1997-06-01
This article provides an overview of the issues and effects of principle-centered health care within organized systems of care; portrays a comprehensive disease management framework for home health care; and offers virtual health management, telecommunications, and mobile computing strategies that enable health management enterprises to meet the accountability demands for health and outcomes maximization in managed care.
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
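For context, retarded-time formulations evaluate the integrand at the emission time of each source point. A hedged statement of the standard retarded-time relation (not quoted from this paper) is:

```latex
% Standard retarded-time relation behind acoustic-analogy integrals: an observer
% at position x and time t receives the contribution of a source point y(tau)
% emitted at the retarded time tau satisfying
\[
  t \;=\; \tau \;+\; \frac{\lvert \mathbf{x} - \mathbf{y}(\tau) \rvert}{c_0}.
\]
% For a moving source this equation is solved numerically for tau; for supersonic
% source motion it can have multiple roots, one reason the collapsing-sphere and
% emission-surface formulations are used instead.
```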
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only increase in the future. Therefore, it has become increasingly important to automatize the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to quantify the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of texture's heterogeneity but demands high computational efforts. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational efforts with high accuracy, and that its application may have utility in the analysis of high-resolution images.
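For reference, the exhaustive gliding-box computation that the tug-of-war estimator approximates is sketched below; the box size is the only parameter, and the second-moment-over-squared-mean form is the classical lacunarity definition.

```python
import numpy as np

def gliding_box_lacunarity(img, box_size):
    """Exhaustive gliding-box lacunarity Lambda(r) = <M^2> / <M>^2, where M is
    the box mass (sum of pixel values) over every box position of size r.
    The tug-of-war approach approximates the second moment in a single pass."""
    h, w = img.shape
    masses = [img[i:i + box_size, j:j + box_size].sum()
              for i in range(h - box_size + 1)
              for j in range(w - box_size + 1)]
    m = np.asarray(masses, dtype=float)
    mean = m.mean()
    return float((m ** 2).mean() / mean ** 2) if mean > 0 else float("inf")
```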
Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric
2014-01-29
Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.
Indonesia’s Electricity Demand Dynamic Modelling
NASA Astrophysics Data System (ADS)
Sulistio, J.; Wirabhuana, A.; Wiratama, M. G.
2017-06-01
Electricity systems modelling is one of the emerging areas in global energy policy studies. The system dynamics approach and computer simulation have become common methods used in energy systems planning and evaluation under many conditions. On the other hand, Indonesia is experiencing several major issues in its electricity system, such as fossil fuel domination, demand-supply imbalances, distribution inefficiency, and bio-devastation. This paper aims to explain the development of system dynamics modelling approaches and computer simulation techniques for representing and predicting electricity demand in Indonesia. In addition, this paper also describes the typical characteristics and relationships of the commercial business sector, the industrial sector, and the family/domestic sector as electricity subsystems in Indonesia. Moreover, it presents direct structure, behavioural, and statistical tests as model validation approaches, and ends with conclusions.
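A minimal stock-and-flow sketch in the system dynamics spirit is given below: three demand subsystems grow at assumed rates and are integrated with Euler steps against an assumed supply trajectory. All initial values and growth rates are illustrative, not figures from the Indonesian model.

```python
# Minimal system-dynamics-style sketch: three demand subsystems (domestic,
# commercial, industrial) integrated with Euler steps. Growth rates, initial
# demands, and the supply trajectory are illustrative assumptions only.
initial_twh = {"domestic": 95.0, "commercial": 40.0, "industrial": 75.0}
growth = {"domestic": 0.06, "commercial": 0.07, "industrial": 0.05}   # per year
supply_twh, supply_growth = 230.0, 0.045
dt, years = 0.25, 15

demand = dict(initial_twh)
for step in range(int(years / dt)):
    for sector in demand:
        demand[sector] += demand[sector] * growth[sector] * dt        # Euler integration
    supply_twh += supply_twh * supply_growth * dt
total = sum(demand.values())
print(f"year {years}: demand {total:.0f} TWh vs supply {supply_twh:.0f} TWh")
```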
Adventures in Private Cloud: Balancing Cost and Capability at the CloudSat Data Processing Center
NASA Astrophysics Data System (ADS)
Partain, P.; Finley, S.; Fluke, J.; Haynes, J. M.; Cronk, H. Q.; Miller, S. D.
2016-12-01
Since the beginning of the CloudSat Mission in 2006, The CloudSat Data Processing Center (DPC) at the Cooperative Institute for Research in the Atmosphere (CIRA) has been ingesting data from the satellite and other A-Train sensors, producing data products, and distributing them to researchers around the world. The computing infrastructure was specifically designed to fulfill the requirements as specified at the beginning of what nominally was a two-year mission. The environment consisted of servers dedicated to specific processing tasks in a rigid workflow to generate the required products. To the benefit of science and with credit to the mission engineers, CloudSat has lasted well beyond its planned lifetime and is still collecting data ten years later. Over that period requirements of the data processing system have greatly expanded and opportunities for providing value-added services have presented themselves. But while demands on the system have increased, the initial design allowed for very little expansion in terms of scalability and flexibility. The design did change to include virtual machine processing nodes and distributed workflows but infrastructure management was still a time consuming task when system modification was required to run new tests or implement new processes. To address the scalability, flexibility, and manageability of the system Cloud computing methods and technologies are now being employed. The use of a public cloud like Amazon Elastic Compute Cloud or Google Compute Engine was considered but, among other issues, data transfer and storage cost becomes a problem especially when demand fluctuates as a result of reprocessing and the introduction of new products and services. Instead, the existing system was converted to an on premises private Cloud using the OpenStack computing platform and Ceph software defined storage to reap the benefits of the Cloud computing paradigm. This work details the decisions that were made, the benefits that have been realized, the difficulties that were encountered and issues that still exist.
A new perspective on the perceptual selectivity of attention under load.
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren
2014-05-01
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
I/O Router Placement and Fine-Grained Routing on Titan to Support Spider II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Dillow, David; Oral, H Sarp
2014-01-01
The Oak Ridge Leadership Computing Facility (OLCF) introduced the concept of Fine-Grained Routing in 2008 to improve I/O performance between the Jaguar supercomputer and Spider, OLCF's center-wide Lustre file system. Fine-grained routing organizes I/O paths to minimize congestion. Jaguar has since been upgraded to Titan, providing more than a ten-fold improvement in peak performance. To support the center's increased computational capacity and I/O demand, the Spider file system has been replaced with Spider II. Building on the lessons learned from Spider, an improved method for placing LNET routers was developed and implemented for Spider II. The fine-grained routing scripts and configuration have been updated to provide additional optimizations and better match the system setup. This paper presents a brief history of fine-grained routing at OLCF, an introduction to the architectures of Titan and Spider II, methods for placing routers in Titan, and details about the fine-grained routing configuration.
Adebayo, Bola; Durey, Angela; Slack-Smith, Linda M
2017-07-01
Information and communication technology (ICT) can provide knowledge and clinical support to those working in residential aged care facilities (RACFs). This paper aims to: (1) review literature on ICT targeted at residents, staff and external providers in RACFs including general practitioners, dental and allied health professionals on improving residents' oral health; (2) identify barriers and enablers to using ICT in promoting oral health at RACFs; and (3) investigate evidence of effectiveness of these approaches in promoting oral health. Findings from this narrative literature review indicate that ICT is not widely used in RACFs, with barriers to usage identified as limited training for staff, difficulties accessing the Internet, limited computer literacy particularly in older staff, cost and competing work demands. Residents also faced barriers including impaired cognitive and psychosocial functioning, limited computer literacy and Internet use. Findings suggest that more education and training in ICT to upskill staff and residents is needed to effectively promote oral health through this medium.
Security training with interactive laser-video-disk technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.
1988-01-01
DOE, through its contractor EG and G Energy Measurements, Inc., has developed a state-of-the-art interactive-video system for use at the Department of Energy's Central Training Academy. Called the Security Training and Evaluation Shooting System (STRESS), the computer-driven decision shooting system employs the latest in laser-video-disk technology. STRESS is designed to provide realistic and stressful training for security inspectors employed by the DOE and its contractors. The system uses wide-screen video projection, sophisticated scenario-branching technology, and customized video scenarios especially designed for the DOE. Firing a weapon that has been modified to shoot "laser bullets," and wearing a special vest that detects "hits," the security inspector encounters adversaries on the wide screen who can shoot or be shot by the inspector in scenarios that demand fast decisions. Based on those decisions, the computer provides instantaneous branching to different scenes, giving the inspector confrontational training with the realism and variability of real life.
How wearable technologies will impact the future of health care.
Barnard, Rick; Shea, J Timothy
2004-01-01
After four hundred years of delivering health care in hospitals, industrialized countries are now shifting towards treating patients at the "point of need". This trend will likely accelerate demand for, and adoption of, wearable computing and smart fabric and interactive textile (SFIT) solutions. These healthcare solutions will be designed to provide real-time vital and diagnostic information to health care providers, patients, and related stakeholders in such a manner as to improve quality of care, reduce the cost of care, and allow patients greater control over their own health. The current market size for wearable computing and SFIT solutions is modest; however, the future outlook is extremely strong. Venture Development Corporation, a technology market research and strategy firm, was founded in 1971. Over the years, VDC has developed and implemented a unique and highly successful methodology for forecasting and analyzing highly dynamic technology markets. VDC has extensive experience in providing multi-client and proprietary analysis in the electronic components, advanced materials, and mobile computing markets.
Coupled Crop/Hydrology Model to Estimate Expanded Irrigation Impact on Water Resources
NASA Astrophysics Data System (ADS)
Handyside, C. T.; Cruise, J.
2017-12-01
A coupled agricultural and hydrologic systems model is used to examine the environmental impact of irrigation in the Southeast. A gridded crop model for the Southeast is used to determine regional irrigation demand, and this demand is used in a regional hydrologic model to determine the hydrologic impact of irrigation. For the Southeast to maintain or expand irrigated agricultural production and adapt to climate change and climate variability, it will require integrated agricultural and hydrologic system models that can calculate irrigation demand and the impact of this demand on river hydrology. These integrated models can be used as (1) historical tools to examine the vulnerability of expanded irrigation to past climate extremes, (2) future tools to examine the sustainability of expanded irrigation under future climate scenarios, and (3) real-time tools to allow dynamic water resource management. Such tools are necessary to assure stakeholders and the public that irrigation can be carried out in a sustainable manner. The system tools discussed include a gridded version of the crop modeling system DSSAT, referred to as GriDSSAT. The irrigation demand from GriDSSAT is coupled to a regional hydrologic model (WaSSI) developed by the Eastern Forest Environmental Threat Assessment Center of the USDA Forest Service. The crop model provides the dynamic irrigation demand, which is a function of the weather, while the hydrologic model includes all other competing uses of water. Examples of using the crop model coupled with the hydrologic model include historical analyses that show the change in hydrology as additional acres of irrigated land are added to watersheds. The first-order change in hydrology is computed in terms of changes in the Water Availability Stress Index (WASSI), which is the ratio of water demand (irrigation, public water supply, industrial use, etc.) to water availability from the hydrologic model. Statistics such as the number of times certain WASSI thresholds are exceeded are also calculated to show the impact of expanded irrigation during times of hydrologic drought and coincident use of water by other sectors. Integrated downstream impacts of irrigation are also calculated through changes in flows through whole river systems.
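A small sketch of the WASSI bookkeeping described above, assuming demand and availability are already available per watershed and per time step from the crop and hydrologic models; the threshold value is illustrative.

```python
import numpy as np

def wassi(demand, supply):
    """Water Availability Stress Index as defined above: total water demand
    (irrigation plus other sectoral uses) divided by water availability."""
    return np.asarray(demand, float) / np.asarray(supply, float)

def exceedance_counts(demand, supply, threshold=0.4):
    """Count, per watershed, how many time steps exceed an illustrative
    stress threshold; inputs are (n_watersheds, n_timesteps) arrays."""
    return (wassi(demand, supply) > threshold).sum(axis=1)
```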
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
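To make the frugal end of the spectrum concrete, a one-at-a-time (OAT) screening of the kind listed above can be sketched in a few lines of Python; the placeholder model and parameter values are assumptions for illustration only, not any of the cited applications:

# One-at-a-time (OAT) screening sketch, one of the computationally frugal
# approaches mentioned above. `model` and the parameters are hypothetical.
import numpy as np

def model(p):
    # placeholder model: prediction depends nonlinearly on three parameters
    return p[0] ** 2 + 3.0 * p[1] + 0.1 * np.sin(p[2])

def oat_sensitivity(model, p0, rel_step=0.01):
    """Dimensionless local sensitivities, perturbing one parameter at a time."""
    base = model(p0)
    sens = np.zeros_like(p0)
    for i, pi in enumerate(p0):
        dp = rel_step * (abs(pi) if pi != 0 else 1.0)
        p = p0.copy()
        p[i] = pi + dp
        sens[i] = (model(p) - base) / dp * pi / base  # scaled by param and output
    return sens

p0 = np.array([2.0, 1.5, 0.3])
print(oat_sensitivity(model, p0))  # requires only (n_params + 1) model runs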
The implementation of AI technologies in computer wargames
NASA Astrophysics Data System (ADS)
Tiller, John A.
2004-08-01
Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much more deeply about military situations before taking action. A good AI opponent would also include national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.
Ergonomic intervention for employed persons with rheumatic conditions.
Allaire, Saralynn J; Backman, Catherine L; Alheresh, Rawan; Baker, Nancy A
2013-01-01
Prior articles in this series on employment and arthritis have documented the major impact arthritis and other rheumatic conditions have on employment. As expected, physically demanding job tasks, including hand use, are substantial risk factors for work limitation. Computer use has been increasing. People with arthritis may choose occupations involving extensive computer use to avoid occupations with other physical demands, but studies show many people with arthritis conditions have difficulty using computers. Ergonomic assessment and implementation help relieve the physical and other demands of jobs. The Ergonomic Assessment Tool for Arthritis (EATA) is specifically for people with arthritis conditions. Since the EATA can be conducted off the worksite, it is feasible to use with workers not wishing to disclose their condition to their employer. Available research supports the effectiveness of ergonomic intervention as a viable method to reduce work limitation for persons with arthritis. Some workers will need additional vocational intervention to remain employed long term. However, ergonomic intervention is a useful first step, as it promotes awareness of arthritis effects on work activities. Assisting workers with arthritis or other rheumatic conditions to use ergonomics to enhance their ability to work well should be an important aspect of managing these conditions.
Global responses for recycling waste CRTs in e-waste.
Singh, Narendra; Li, Jinhui; Zeng, Xianlai
2016-11-01
The management of used cathode ray tube (CRT) devices is a major problem worldwide due to the rapid uptake of the technology and the early obsolescence of CRT devices, which are considered an environmental hazard if disposed of improperly. CRT production originally grew in step with computer and television demand, but with rapid technological innovation, CRT TVs and computer screens have been replaced by new products such as liquid crystal displays (LCDs) and plasma display panels (PDPs). This change creates a large waste stream of obsolete CRTs in developed countries, and developing countries will become major producers of CRT waste in the coming years. There is also a high level of trans-boundary movement of these devices as second-hand electronic equipment into developing countries in an attempt to bridge the 'digital divide'. Moreover, the current global production of e-waste is estimated to be 41 million tonnes per year, a major part of which consists of CRT devices. This review article provides a concise overview of the world's current CRT waste scenario, namely the magnitude of the demand and processing, and current disposal and recycling operations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Flexible services for the support of research.
Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John
2013-01-28
Cloud computing has been increasingly adopted by users and providers to promote a flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.
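A hedged sketch of the broker-layer idea described above is given below; all class names, endpoints and the placement policy are hypothetical illustrations, not part of the deployed system:

# Toy broker for a federation of independent clouds: one entry point,
# one authentication hook, and a simple placement policy that avoids
# copying data across infrastructures. Names and endpoints are made up.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    endpoint: str
    free_tb: float

class Broker:
    def __init__(self, clouds, authenticate):
        self.clouds = clouds
        self.authenticate = authenticate     # single sign-on hook

    def place(self, user_token, dataset_tb):
        if not self.authenticate(user_token):
            raise PermissionError("authentication failed")
        # policy: keep the dataset where it fits, preferring the most free space
        candidates = [c for c in self.clouds if c.free_tb >= dataset_tb]
        return max(candidates, key=lambda c: c.free_tb) if candidates else None

broker = Broker([Cloud("site-a", "https://a.example/api", 120.0),
                 Cloud("site-b", "https://b.example/api", 40.0)],
                authenticate=lambda tok: tok == "valid-token")
print(broker.place("valid-token", 60.0).name)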
Agricultural production and water use scenarios in Cyprus under global change
NASA Astrophysics Data System (ADS)
Bruggeman, Adriana; Zoumides, Christos; Camera, Corrado; Pashiardis, Stelios; Zomeni, Zomenia
2014-05-01
In many countries of the world, food demand exceeds the total agricultural production. In semi-arid countries, agricultural water demand often also exceeds the sustainable supply of water resources. These water-stressed countries are expected to become even drier as a result of global climate change. This will have a significant impact on the future of the agricultural sector and on food security. The aim of the AGWATER project consortium is to provide recommendations for climate change adaptation for the agricultural sector in Cyprus and the wider Mediterranean region. Gridded climate data sets with 1-km horizontal resolution were prepared for Cyprus for 1980-2010. Regional Climate Model results were statistically downscaled with the help of spatial weather generators. A new soil map was prepared using a predictive modelling and mapping technique and a large spatial database with soil and environmental parameters. Stakeholder meetings with agriculture and water stakeholders were held to develop future water prices, based on energy scenarios, and to identify climate-resilient production systems. Greenhouses (including hydroponic systems), grapes, potatoes, cactus pears and carob trees were the most frequently identified production systems. The green-blue-water model, based on the FAO-56 dual crop coefficient approach, has been set up to compute agricultural water demand and yields for all crop fields in Cyprus under selected future scenarios. A set of agricultural production and water use performance indicators is computed by the model, including green and blue water use, crop yield, crop water productivity, net value of crop production and economic water productivity. This work is part of the AGWATER project - AEIFORIA/GEOGRO/0311(BIE)/06 - co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation.
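For readers unfamiliar with the FAO-56 dual crop coefficient approach named above, its standard published form (as defined by FAO, not the project's own implementation) splits crop evapotranspiration into basal transpiration and soil evaporation components:

% FAO-56 dual crop coefficient (standard published form, not project code):
% ET_c = crop evapotranspiration, ET_0 = reference evapotranspiration,
% K_cb = basal crop coefficient, K_e = soil evaporation coefficient,
% K_s = water-stress reduction factor (K_s = 1 when soil water is not limiting).
\[
  ET_c = (K_s\,K_{cb} + K_e)\, ET_0
\]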
A Scalable, Out-of-Band Diagnostics Architecture for International Space Station Systems Support
NASA Technical Reports Server (NTRS)
Fletcher, Daryl P.; Alena, Rick; Clancy, Daniel (Technical Monitor)
2002-01-01
The computational infrastructure of the International Space Station (ISS) is a dynamic system that supports multiple vehicle subsystems such as Caution and Warning, Electrical Power Systems and Command and Data Handling (C&DH), as well as scientific payloads of varying size and complexity. The dynamic nature of the ISS configuration, coupled with the increased demand for payload support, places a significant burden on the inherently resource-constrained computational infrastructure of the ISS. Onboard system diagnostics applications are hosted on computers that are elements of the avionics network, while ground-based diagnostic applications receive only a subset of available telemetry, down-linked via S-band communications. In this paper we propose a scalable, out-of-band diagnostics architecture for ISS systems support that uses a read-only connection for C&DH data acquisition, which provides a lower cost of deployment and maintenance (versus a higher-criticality read-write connection). The diagnostics processing burden is off-loaded from the avionics network to elements of the on-board LAN that have a lower overall cost of operation and increased computational capacity. A superset of diagnostic data, richer in content than the configured telemetry, is made available to Advanced Diagnostic System (ADS) clients running on wireless handheld devices, affording the crew greater mobility for troubleshooting and providing improved insight into vehicle state. The superset of diagnostic data is made available to the ground in near real-time via an out-of-band downlink, providing a high level of fidelity between vehicle state and test, training and operational facilities on the ground.
Identification of task demands and usability issues in police use of mobile computing terminals.
Zahabi, Maryam; Kaber, David
2018-01-01
Crash reports from various states in the U.S. have shown high numbers of emergency vehicle crashes, especially in law enforcement situations. This study identified the perceived importance and frequency of police mobile computing terminal (MCT) tasks, quantified the demands of different tasks using a cognitive performance modeling methodology, identified usability violations of current MCT interface designs, and formulated design recommendations for an enhanced interface. Results revealed that "access call notes", "plate number check" and "find location on map" are the most important and frequently performed tasks for officers. "Reading plate information" was also found to be the most visually and cognitively demanding task-method. Usability principles of "using simple and natural dialog" and "minimizing user memory load" were violated by the current MCT interface design. The enhanced design showed potential for reducing cognitive demands and task completion time. Findings should be further validated using a driving simulation study. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign-on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
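The resource figures quoted above translate into a simple back-of-the-envelope calculation; the sample count in this Python sketch is an assumed value for illustration, not a project figure:

# Scaling estimate using the numbers quoted above: 8 CPUs for ~12 h per sample
# (alignment only) and 5-10 GB per BAM file. n_samples is an assumption.
n_samples = 100
cpu_hours_per_sample = 8 * 12                      # alignment step only
total_cpu_hours = n_samples * cpu_hours_per_sample
bam_storage_gb = (n_samples * 5, n_samples * 10)   # lower/upper bound

print(f"~{total_cpu_hours} CPU-hours, "
      f"{bam_storage_gb[0]}-{bam_storage_gb[1]} GB of BAM files")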
ERIC Educational Resources Information Center
Islam, Muhammad Faysal
2013-01-01
Cloud computing offers the advantage of on-demand, reliable and cost-efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enables consumers to upgrade or downsize their services as needed. In a cloud environment,…
ERIC Educational Resources Information Center
Hennessey, Eden J. V.; Mueller, Julie; Beckett, Danielle; Fisher, Peter A.
2017-01-01
Given a growing digital economy with complex problems, demands are being made for education to address computational thinking (CT)--an approach to problem solving that draws on the tenets of computer science. We conducted a comprehensive content analysis of the Ontario elementary school curriculum documents for 44 CT-related terms to examine the…
An Undergraduate Computer Engineering Option for Electrical Engineering.
ERIC Educational Resources Information Center
National Academy of Engineering, Washington, DC. Commission on Education.
This report is the result of a study, funded by the National Science Foundation, of a group constituted as the COSINE Task Force on Undergraduate Education in Computer Engineering in 1969. The group was formed in response to the growing demand for education in computer engineering and the limited opportunities for study in this area. Computer…
NASA Astrophysics Data System (ADS)
Moro, A. C.; Nadesh, R. K.
2017-11-01
The cloud computing paradigm has transformed the way we do business in today’s world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest growing factors in this is mobile cloud computing. With the option of offloading now available to mobile users, mobile users can offload entire applications onto cloudlets. Given the problems regarding availability and the limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we examine a fast algorithm that decides, based on an offloading probability, whether the mobile user should use a cloudlet or rely on local memory. We have partially implemented the algorithm that decides whether a task can be carried out locally or given to a cloudlet. Because performing the complete computation becomes a burden on the mobile device, we look to offload this computation to a cloud in our paper. Further, we use a file compression technique before sending the file to the cloud to reduce the load.
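A hedged Python sketch of this style of decision rule follows; the probability estimate, the threshold and the use of gzip are illustrative assumptions and do not reproduce the paper's algorithm:

# Toy offloading decision: offload to a cloudlet when an estimated offloading
# probability exceeds a threshold, compressing the payload before upload.
import gzip

def offloading_probability(cloudlet_available, free_local_mb, task_mb):
    """Toy estimate: prefer offloading when local memory is tight."""
    if not cloudlet_available:
        return 0.0
    return min(task_mb / max(free_local_mb, 1e-6), 1.0)

def decide_and_prepare(payload: bytes, cloudlet_available, free_local_mb,
                       task_mb, threshold=0.5):
    p = offloading_probability(cloudlet_available, free_local_mb, task_mb)
    if p > threshold:
        return "cloudlet", gzip.compress(payload)   # compress before upload
    return "local", payload

target, data = decide_and_prepare(b"sensor readings" * 1000, True, 64.0, 48.0)
print(target, len(data))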
Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources
NASA Astrophysics Data System (ADS)
Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.
2011-12-01
Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
Computational Modeling in Plasma Processing for 300 mm Wafers
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Migration toward 300 mm wafer size has been initiated recently due to process economics and to meet future demands for integrated circuits. A major issue facing the semiconductor community at this juncture is the development of suitable processing equipment, for example, plasma processing reactors that can accommodate 300 mm wafers. In this Invited Talk, scaling of reactors will be discussed with the aid of computational fluid dynamics results. We have undertaken reactor simulations using CFD with reactor geometry, pressure, and precursor flow rates as parameters in a systematic investigation. These simulations provide guidelines for scaling up in reactor design.
NASA Astrophysics Data System (ADS)
Gregory, A. E.; Benedict, K. K.; Zhang, S.; Savickas, J.
2017-12-01
Large-scale, high-severity wildfires in forests have become increasingly prevalent in the western United States due to fire exclusion. Although past work has focused on the immediate consequences of wildfire (i.e., runoff magnitude and debris flow), little has been done to understand the post-wildfire hydrologic consequences of vegetation regrowth. Furthermore, vegetation is often characterized by static parameterizations within hydrological models. In order to understand the temporal relationship between hydrologic processes and revegetation, we modularized and partially automated the hydrologic modeling process to increase connectivity between remotely sensed data, the Virtual Watershed Platform (a data management resource, called the VWP), input meteorological data, and the Precipitation-Runoff Modeling System (PRMS). This process was used to run PRMS simulations in the Valles Caldera of NM, an area impacted by the 2011 Las Conchas Fire, before and after the fire to evaluate changes in hydrologic processes. The modeling environment addressed some of the existing challenges faced by hydrological modelers. At present, modelers are somewhat limited in their ability to push the boundaries of hydrologic understanding. Specific issues faced by modelers include limited computational resources to model processes at large spatial and temporal scales, data storage capacity and accessibility from the modeling platform, computational and time constraints for experimental modeling, and the skills to integrate modeling software in ways that have not been explored. By taking an interdisciplinary approach, we were able to address some of these challenges by leveraging the skills of hydrologic, data, and computer scientists, and the technical capabilities provided by a combination of on-demand/high-performance computing, distributed data, and cloud services. The hydrologic modeling process was modularized to include options for distributing meteorological data, parameter space experimentation, data format transformation, looping, validation of models, and containerization for enabling new analytic scenarios. The user interacts with the modules through Jupyter Notebooks, which can be connected to an on-demand computing and HPC environment and to data services built as part of the VWP.
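A minimal Python sketch of the looping and parameter-space-experimentation modules described above is shown below; run_prms is a hypothetical wrapper around a model run, and the parameter names and values are illustrative only:

# Parameter sweep skeleton: loop over a small grid of parameter sets and
# collect summary results. run_prms is a stand-in, not real PRMS tooling.
import itertools, json

def run_prms(params):
    # placeholder: in practice this would write a parameter file, launch the
    # model (locally, on HPC, or in a container) and return summary statistics
    return {"params": params, "mean_runoff": sum(params.values())}

grid = {
    "soil_moist_max": [2.0, 4.0, 6.0],   # illustrative parameter names/values
    "jh_coef": [0.005, 0.01],
}

results = []
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    results.append(run_prms(params))

print(json.dumps(results[:2], indent=2))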
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
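The basic data flow firing rule underlying such models can be illustrated with a short Python toy (an illustration, not the paper's formal model): a node fires only when tokens are present on all of its input arcs, consuming them and producing a token on each output arc.

# Toy data flow interpreter: nodes fire when every input arc holds a token.
from collections import deque

class Node:
    def __init__(self, name, op, inputs, outputs):
        self.name, self.op = name, op
        self.inputs, self.outputs = inputs, outputs   # lists of arc names

def step(nodes, arcs):
    """Fire every enabled node once; returns True if any node fired."""
    fired = False
    for n in nodes:
        if all(arcs[a] for a in n.inputs):            # all input tokens present
            args = [arcs[a].popleft() for a in n.inputs]
            result = n.op(*args)
            for a in n.outputs:
                arcs[a].append(result)
            fired = True
    return fired

# tiny graph computing (a + b) * c
arcs = {k: deque() for k in ["a", "b", "c", "s", "out"]}
nodes = [Node("add", lambda x, y: x + y, ["a", "b"], ["s"]),
         Node("mul", lambda x, y: x * y, ["s", "c"], ["out"])]
arcs["a"].append(2); arcs["b"].append(3); arcs["c"].append(4)
while step(nodes, arcs):
    pass
print(arcs["out"].popleft())   # 20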
Back to the future: virtualization of the computing environment at the W. M. Keck Observatory
NASA Astrophysics Data System (ADS)
McCann, Kevin L.; Birch, Denny A.; Holt, Jennifer M.; Randolph, William B.; Ward, Josephine A.
2014-07-01
Over its two decades of science operations, the W.M. Keck Observatory computing environment has evolved to contain a distributed hybrid mix of hundreds of servers, desktops and laptops of multiple different hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch-free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost savings potential. Very significant among these was the substantial power savings, which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, and virtualized desktops for typical office users as well as the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, disaster readiness and recovery.
NASA Technical Reports Server (NTRS)
Himer, J. T.
1992-01-01
Fortran has largely enjoyed prominence for the past few decades as the computer programming language of choice for numerically intensive scientific, engineering, and process control applications. Fortran's well understood static language syntax has allowed the resulting parsers and compiler optimizing technologies to often generate among the most efficient and fastest run-time executables, particularly on high-end scalar and vector supercomputers. Computing architectures and paradigms have changed considerably since the last ANSI/ISO Fortran release in 1978, and while FORTRAN 77 has more than survived, its aging features provide only partial functionality for today's demanding computing environments. The simple block procedural languages have been necessarily evolving, or giving way, to specialized supercomputing, network resource, and object-oriented paradigms. To address these new computing demands, ANSI has worked for the last 12 years, with three international public reviews, to deliver Fortran 90. Fortran 90 has superseded and replaced ISO FORTRAN 77 internationally as the sole Fortran standard; in the US, Fortran 90 is expected to be adopted as the ANSI standard this summer, coexisting with ANSI FORTRAN 77 until at least 1996. The development path and current state of Fortran will be briefly described, highlighting the many new Fortran 90 syntactic and semantic additions which support (among others): free-form source; array syntax; new control structures; modules and interfaces; pointers; derived data types; dynamic memory; enhanced I/O; operator overloading; data abstraction; user optional arguments; new intrinsics for array, bit manipulation, and system inquiry; and enhanced portability through better generic control of underlying system arithmetic models. Examples from dynamical astronomy and signal and image processing will attempt to illustrate Fortran 90's applicability to today's general scalar, vector, and parallel scientific and engineering requirements and object-oriented programming paradigms. Time permitting, current work proceeding on the future development of Fortran 2000 and collateral standards will be introduced.
NASA Astrophysics Data System (ADS)
Abad Lopez, Carlos Adrian
Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, accidents, among others. Infrastructure planners have, traditionally, focused on only the cost of the system; today, resilience and sustainability are increasingly becoming more important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered as the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances when compared with the alternative that involves increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices to reduce maintenance costs. We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility, dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from the metered electricity consumption, and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas, provide more reliable power to their customers at significantly reduced prices. Providing power to widely spread out communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators to increase their revenue while making their systems more resilient to inclement weather conditions.
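As a hedged illustration of detecting a small, sparse demand reduction relative to a baseline, in the spirit of the sparse optimization methods mentioned above (the dissertation's actual formulation is not reproduced here), soft-thresholding of the observed-minus-baseline residual yields a sparse estimate of when load was shed:

# Sparse detection of a demand-response event via soft-thresholding (the
# proximal operator of the L1 penalty). Series and threshold are synthetic.
import numpy as np

rng = np.random.default_rng(4)
hours = 96                                   # four days of hourly meter data
baseline = 5 + np.sin(2 * np.pi * np.arange(hours) / 24)   # expected load (kW)
true_reduction = np.zeros(hours)
true_reduction[40:44] = 0.8                  # DR event: 4 hours of curtailment
observed = baseline - true_reduction + rng.normal(0, 0.1, hours)

residual = baseline - observed               # positive where load was reduced
lam = 0.3                                    # sparsity threshold (tuning choice)
estimate = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)

print("detected event hours:", np.nonzero(estimate > 0)[0])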
IPv6 testing and deployment at Prague Tier 2
NASA Astrophysics Data System (ADS)
Kouba, Tomáŝ; Chudoba, Jiří; Eliáŝ, Marek; Fiala, Lukáŝ
2012-12-01
The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, Atlas, Alice, Auger); it currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind a NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that provides less routing, more switching and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or the telnet/ssh protocols provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to an IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced and the configuration decisions that we have made during IPv6 testing. We also present a testbed built on virtual machines that was used for all the testing and evaluation.
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
NASA's Aviation Safety and Security Program is pursuing research in on-board Structural Health Management (SHM) technologies for purposes of reducing or eliminating aircraft accidents due to system and component failures. Under this program, NASA Langley Research Center (LaRC) is developing a strain-based structural health-monitoring concept that incorporates a fiber optic-based measuring system for acquiring strain values. This fiber optic-based measuring system provides for the distribution of thousands of strain sensors embedded in a network of fiber optic cables. The resolution of strain value at each discrete sensor point requires a computationally demanding data reduction software process that, when hosted on a conventional processor, is not suitable for near real-time measurement. This report describes the development and integration of an alternative computing environment using dedicated computing hardware for performing the data reduction. Performance comparison between the existing and the hardware-based system is presented.
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
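A minimal Python sketch of the Pareto modeling step follows; the synthetic inter-arrival times stand in for the actual Spider traces, which are not reproduced here:

# Fit a Pareto distribution to (synthetic) request inter-arrival times and
# compare tail quantiles of the data and the fitted model.
import numpy as np
from scipy import stats

interarrival = stats.pareto.rvs(b=2.5, scale=0.01, size=10_000, random_state=0)

# Fit shape (b) and scale with the location pinned at zero.
b_hat, loc_hat, scale_hat = stats.pareto.fit(interarrival, floc=0)
print(f"fitted shape={b_hat:.2f}, scale={scale_hat:.4f}")
print("empirical 99th percentile:", np.quantile(interarrival, 0.99))
print("fitted 99th percentile:  ",
      stats.pareto.ppf(0.99, b_hat, loc=loc_hat, scale=scale_hat))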
Key Residential Building Equipment Technologies for Control and Grid Support PART I (Residential)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Onar, Omer C; DeVault, Robert C
2011-09-01
Electrical energy consumption of the residential sector is a crucial area of research that has in the past primarily focused on increasing the efficiency of household devices such as water heaters, dishwashers, air conditioners, and clothes washer and dryer units. However, the focus of this research is shifting as objectives such as developing the smart grid and ensuring that the power system remains reliable come to the fore, along with the increasing need to reduce energy use and costs. Load research has started to focus on mechanisms to support the power system through demand reduction and/or reliability services. The power system relies on matching generation and load, and day-ahead and real-time energy markets capture most of this need. However, a separate set of grid services exists to address the discrepancies in load and generation arising from contingencies and operational mismatches, and to ensure that the transmission system is available for delivery of power from generation to load. Currently, these grid services are mostly provided by generation resources. The addition of renewable resources with their inherent variability can complicate the issue of power system reliability and lead to the increased need for grid services. Using load as a resource, through demand response programs, can fill the additional need for flexible resources and even reduce costly energy peaks. Loads have been shown to have response that is equal to or better than generation in some cases. Furthermore, price-incentivized demand response programs have been shown to reduce the peak energy requirements, thereby affecting the wholesale market efficiency and overall energy prices. The residential sector is not only the largest consumer of electrical energy in the United States, but also has the highest potential to provide demand reduction and power system support, as technological advancements in load control, sensor technologies, and communication are made. The prevailing loads based on the largest electrical energy consumers in the residential sector are space heating and cooling, washer and dryer, water heating, lighting, computers and electronics, dishwasher and range, and refrigeration. As the largest loads, these loads provide the highest potential for delivering demand response and reliability services. Many residential loads have inherent flexibility that is related to the purpose of the load. Depending on the load type, electric power consumption levels can either be ramped, changed in a step-change fashion, or completely removed. Loads with only on-off capability (such as clothes washers and dryers) provide less flexibility than resources that can be ramped or step-changed. Add-on devices may be able to provide extra demand response capabilities. Still, operating residential loads effectively requires awareness of the delicate balance between occupants' health and comfort and electrical energy consumption. This report is Phase I of a series of reports aimed at identifying gaps in automated home energy management systems for incorporation of building appliances, vehicles, and renewable adoption into a smart grid, specifically with the intent of examining demand response and load factor control for power system support. The objective is to capture existing gaps in load control, energy management systems, and sensor technology with consideration of PHEV and renewable technologies to establish areas of research for the Department of Energy.
In this report, (1) data is collected and examined from state-of-the-art homes to characterize the primary residential loads as well as PHEVs and photovoltaics for potential adoption into energy management control strategies; and (2) demand response rules and requirements across the various demand response programs are examined for potential participation of residential loads. This report will be followed by a Phase II report aimed at identifying the current state of technology of energy management systems, sensors, and communication technologies for demand response and load factor control applications for the residential sector. The purpose is to cover the gaps that exist in the information captured by sensors so that the energy management system is able to provide demand response and load factor control. The vision is the development of an energy management system or other controlling enterprise hardware and software that is not only able to control loads, PHEVs, and renewable generation for demand response and load factor control, but also to do so with consumer comfort in mind and in an optimal fashion.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
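A hedged Python sketch of a pipeline in the spirit described above is given below; a simple variance-based selection stands in for the paper's MRM (which is not reproduced), and the synthetic signals are placeholders, not neural recordings:

# Wavelet-energy features + k-NN classification on synthetic signals.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(signal, wavelet="db4", level=4):
    # energy of each wavelet sub-band as the feature vector
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(1)
# Synthetic placeholder signals: class 1 has higher variance than class 0.
X = np.vstack([dwt_features(rng.normal(0, 1 + label, 256))
               for label in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)               # 0 = no stimulation needed, 1 = stimulate

keep = np.argsort(X.var(axis=0))[-3:]   # crude variance-based stand-in for MRM
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:, keep], y)
print("training accuracy:", clf.score(X[:, keep], y))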
MAT - MULTI-ATTRIBUTE TASK BATTERY FOR HUMAN OPERATOR WORKLOAD AND STRATEGIC BEHAVIOR RESEARCH
NASA Technical Reports Server (NTRS)
Comstock, J. R.
1994-01-01
MAT, a Multi-Attribute Task battery, gives the researcher the capability of performing multi-task workload and performance experiments. The battery provides a benchmark set of tasks for use in a wide range of laboratory studies of operator performance and workload. MAT incorporates tasks analogous to activities that aircraft crew members perform in flight, while providing a high degree of experiment control, performance data on each subtask, and freedom to use non-pilot test subjects. The MAT battery primary display is composed of four separate task windows which are as follows: a monitoring task window which includes gauges and warning lights, a tracking task window for the demands of manual control, a communication task window to simulate air traffic control communications, and a resource management task window which permits maintaining target levels on a fuel management task. In addition, a scheduling task window gives the researcher information about future task demands. The battery also provides the option of manual or automated control of tasks. The task generates performance data for each subtask. The task battery may be paused and onscreen workload rating scales presented to the subject. The MAT battery was designed to use a serially linked second computer to generate the voice messages for the Communications task. The MATREMX program and support files, which are included in the MAT package, were designed to work with the Heath Voice Card (Model HV-2000, available through the Heath Company, Benton Harbor, Michigan 49022); however, the MATREMX program and support files may easily be modified to work with other voice synthesizer or digitizer cards. The MAT battery task computer may also be used independent of the voice computer if no computer synthesized voice messages are desired or if some other method of presenting auditory messages is devised. MAT is written in QuickBasic and assembly language for IBM PC series and compatible computers running MS-DOS. The code in MAT is written for Microsoft QuickBasic 4.5 and Microsoft Macro Assembler 5.1. This package requires a joystick and EGA or VGA color graphics. An 80286, 386, or 486 processor machine is highly recommended. The standard distribution medium for MAT is a 5.25 inch 360K MS-DOS format diskette. The files are compressed using the PKZIP file compression utility. PKUNZIP is included on the distribution diskette. MAT was developed in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS, Microsoft QuickBasic, and Microsoft Macro Assembler are registered trademarks of Microsoft Corporation. PKZIP and PKUNZIP are registered trademarks of PKWare, Inc.
Multiplexing 200 spatial modes with a single hologram
NASA Astrophysics Data System (ADS)
Rosales-Guzmán, Carmelo; Bhebhe, Nkosiphile; Mahonisi, Nyiku; Forbes, Andrew
2017-11-01
The on-demand tailoring of light's spatial shape is of great relevance in a wide variety of research areas. Computer-controlled devices, such as spatial light modulators (SLMs) or digital micromirror devices, offer a very accurate, flexible and fast holographic means to this end. Remarkably, digital holography affords the simultaneous generation of multiple beams (multiplexing), a tool with numerous applications in many fields. Here, we provide a self-contained tutorial on light beam multiplexing. Through the use of several examples, the readers will be guided step by step in the process of light beam shaping and multiplexing. Additionally, we provide a quantitative analysis on the multiplexing capabilities of SLMs to assess the maximum number of beams that can be multiplexed on a single SLM, showing approximately 200 modes on a single hologram.
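The multiplexing principle can be illustrated with a short numpy sketch: several modes are superimposed, each on its own carrier grating, and the argument of the complex sum is displayed as a single phase mask. The grating frequencies and vortex modes below are arbitrary choices, not the settings used in the paper.

# Phase-only multiplexing of several modes onto one hologram (illustrative).
import numpy as np

N = 512
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
r, phi = np.hypot(x, y), np.arctan2(y, x)

def mode(l):
    """Simple vortex beam of topological charge l with a Gaussian envelope."""
    return (r ** abs(l)) * np.exp(-r ** 2 / 0.25) * np.exp(1j * l * phi)

carriers = [20, 40, 60, 80]                 # grating lines across the aperture
field = sum(mode(l) * np.exp(2j * np.pi * fx * x)
            for l, fx in zip([-2, -1, 1, 2], carriers))

hologram = np.mod(np.angle(field), 2 * np.pi)   # phase mask in [0, 2*pi)
print(hologram.shape, hologram.min(), hologram.max())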
NASA Astrophysics Data System (ADS)
Koskinas, Aristotelis; Zacharopoulou, Eleni; Pouliasis, George; Engonopoulos, Ioannis; Mavroyeoryos, Konstantinos; Deligiannis, Ilias; Karakatsanis, Georgios; Dimitriadis, Panayiotis; Iliopoulou, Theano; Koutsoyiannis, Demetris; Tyralis, Hristos
2017-04-01
We simulate the electrical energy demand in the remote island of Astypalaia. To this end we first obtain information regarding the local socioeconomic conditions and energy demand. Secondly, the available hourly demand data are analysed at various time scales (hourly, weekly, daily, seasonal). The cross-correlations between the electrical energy demand and the mean daily temperature, as well as other climatic variables for the same time period, are computed. We also investigate the cross-correlation between those climatic variables and other variables related to renewable energy resources from numerous observations around the globe in order to assess the impact of each one on a hybrid renewable energy system. An exploratory data analysis including all variables is performed with the purpose of finding hidden relationships. Finally, the demand is simulated considering all the periodicities found in the analysis. The simulated time series will be used in the development of a framework for the planning of a hybrid renewable energy system in Astypalaia. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
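A minimal Python sketch of the cross-correlation step is shown below, with synthetic daily series standing in for the Astypalaia demand and temperature records, which are not reproduced here:

# Lagged cross-correlation between synthetic demand and temperature series.
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(365)
temperature = 20 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, 365)
demand = 100 + 4 * np.maximum(temperature - 24, 0) + rng.normal(0, 3, 365)  # cooling load

def cross_corr(a, b, max_lag=10):
    """Correlation of a at time t+lag with b at time t, for lag = 0..max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return {lag: np.corrcoef(a[lag:], b[:len(b) - lag])[0, 1]
            for lag in range(max_lag + 1)}

corr = cross_corr(demand, temperature)
print(max(corr, key=corr.get), round(max(corr.values()), 2))  # best lag, correlation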
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Vallenilla Ferrara, A. M.
2009-04-01
The constant growth in the number of contaminated sites, the unsustainable use of natural resources and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets a few meters deep to targets at depths of several kilometers, it is primarily used by the hydrocarbon industry and rarely for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is substituted by remote access to high-performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow already accomplished projects to be published, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.
From photons to big-data applications: terminating terabits.
Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A
2016-03-06
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.
Johnson, M M
1990-03-01
This study explored the use of process tracing techniques in examining the decision-making processes of older and younger adults. Thirty-six college-age and thirty-six retirement-age participants decided which one of six cars they would purchase on the basis of computer-accessed data. They provided information search protocols. Results indicate that total time to reach a decision did not differ according to age. However, retirement-age participants used less information, spent more time viewing, and re-viewed fewer bits of information than college-age participants. Information search patterns differed markedly between age groups. Patterns of retirement-age adults indicated their use of noncompensatory decision rules which, according to decision-making literature (Payne, 1976), reduce cognitive processing demands. The patterns of the college-age adults indicated their use of compensatory decision rules, which have higher processing demands.
Research, Development and Validation of the Daily Demand Computer Schedule 360/50. Final Report.
ERIC Educational Resources Information Center
Ovard, Glen F.; Rowley, Vernon C.
A study was designed to further the research, development and validation of the Daily Demand Computer Schedule (DDCS), a system by which students can be rescheduled daily for facilitating their individual continuous progress through the curriculum. It will allow teachers to regroup students as needed based upon that progress, and will make time a…
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance while maintaining the energy efficiency and adaptability imposed by interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fall short in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Parallelization of the Physical-Space Statistical Analysis System (PSAS)
NASA Technical Reports Server (NTRS)
Larson, J. W.; Guo, J.; Lyster, P. M.
1999-01-01
Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational reproducibility is well known in the parallel computing community. It is a requirement that the parallel code perform calculations in a fashion that will yield identical results on different configurations of processing elements on the same platform. In some cases this problem can be solved by sacrificing performance. Meeting this requirement and still achieving high performance is very difficult. Topics to be discussed include: current PSAS design and parallelization strategy; reproducibility issues; load balance vs. database memory demands, possible solutions to these problems.
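The innovation equation above is solved on the unstructured observation grid with a preconditioned conjugate gradient method. As a generic illustration of that solver (not the DAO's factored-operator implementation), the following minimal NumPy sketch uses a simple Jacobi (diagonal) preconditioner as a stand-in; the matrix, preconditioner, and tolerances are illustrative assumptions only.

    import numpy as np

    def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=1000):
        # Solve A x = b for symmetric positive-definite A with preconditioned CG;
        # M_inv applies the (approximate) inverse of the preconditioner to a vector.
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy symmetric positive-definite system with a Jacobi (diagonal) preconditioner.
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((200, 200))
    A = Q @ Q.T + 200.0 * np.eye(200)
    b = rng.standard_normal(200)
    x = preconditioned_cg(A, b, M_inv=lambda v: v / np.diag(A))
    print(np.linalg.norm(A @ x - b))   # residual of the computed solution

In PSAS the operator is reportedly never formed explicitly but applied as a successive product of factored operators on a vector; the iteration itself, however, has this general shape.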
HAL/SM language specification. [programming languages and computer programming for space shuttles
NASA Technical Reports Server (NTRS)
Williams, G. P. W., Jr.; Ross, C.
1975-01-01
A programming language is presented for the flight software of the NASA Space Shuttle program. It is intended to satisfy virtually all of the flight software requirements of the space shuttle. To achieve this, it incorporates a wide range of features, including applications-oriented data types and organizations, real time control mechanisms, and constructs for systems programming tasks. It is a higher order language designed to allow programmers, analysts, and engineers to communicate with the computer in a form approximating natural mathematical expression. Parts of the English language are combined with standard notation to provide a tool that readily encourages programming without demanding computer hardware expertise. Block diagrams and flow charts are included. The semantics of the language is discussed.
ArcGIS Framework for Scientific Data Analysis and Serving
NASA Astrophysics Data System (ADS)
Xu, H.; Ju, W.; Zhang, J.
2015-12-01
ArcGIS is a platform for managing, visualizing, analyzing, and serving geospatial data. Scientific data, as part of geospatial data, feature multiple dimensions (X, Y, time, and depth) and large volumes. The multidimensional mosaic dataset (MDMD), a newly enhanced data model in ArcGIS, models multidimensional gridded data (e.g. raster or image) as a hypercube and enables ArcGIS to handle large-volume and near-real-time scientific data. Built on top of the geodatabase, the MDMD stores the dimension values and the variables (2D arrays) in a geodatabase table, which allows accessing a slice or slices of the hypercube through a simple query and supports animating changes along the time or vertical dimension using ArcGIS desktop or web clients. Through raster types, the MDMD can manage not only netCDF, GRIB, and HDF formats but also many other formats and satellite data. It is scalable and can handle large data volumes. The parallel geo-processing engine makes data ingestion fast and easy. A raster function, the definition of a raster processing algorithm, is a very important component of the ArcGIS platform for on-demand raster processing and analysis. Scientific data analytics is achieved through the MDMD and raster function templates, which perform on-demand scientific computation with variables ingested in the MDMD: for example, aggregating monthly averages from daily data, computing the total rainfall of a year, calculating the heat index for forecast data, and identifying fishing habitat zones. Additionally, the MDMD with the associated raster function templates can be served through ArcGIS Server as image services, which provide a framework for on-demand server-side computation and analysis, and the published services can be accessed by multiple clients such as ArcMap, ArcGIS Online, JavaScript, REST, WCS, and WMS. This presentation will focus on the MDMD model and raster processing templates. In addition, MODIS land cover, NDFD weather service, and HYCOM ocean model data will be used to illustrate how the ArcGIS platform and MDMD model can facilitate scientific data visualization and analytics and how the analysis results can be shared with a wider audience through ArcGIS Online and Portal.
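As a conceptual analogue of the hypercube slicing and on-demand aggregation described above (plain NumPy only, not the ArcGIS or arcpy API; array shapes and variable names are hypothetical):

    import numpy as np

    # Synthetic daily "hypercube" with dimensions (time, y, x): one year of a
    # single variable on a 10 x 20 grid.
    days = 365
    cube = np.random.rand(days, 10, 20).astype(np.float32)

    # A slice query: one day's 2D raster out of the hypercube.
    day_120 = cube[120]                       # shape (10, 20)

    # On-demand aggregation: total "rainfall" for the year at every cell.
    annual_total = cube.sum(axis=0)           # shape (10, 20)

    # Monthly means from daily slices (simple 30-day bins for illustration).
    starts = np.arange(0, days, 30)
    monthly_mean = np.stack([cube[s:s + 30].mean(axis=0) for s in starts])
    print(day_120.shape, annual_total.shape, monthly_mean.shape)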
Integrating photonics with silicon nanoelectronics for the next generation of systems on a chip.
Atabaki, Amir H; Moazeni, Sajjad; Pavanello, Fabio; Gevorgyan, Hayk; Notaros, Jelena; Alloatti, Luca; Wade, Mark T; Sun, Chen; Kruger, Seth A; Meng, Huaiyu; Al Qubaisi, Kenaish; Wang, Imbert; Zhang, Bohan; Khilo, Anatol; Baiocco, Christopher V; Popović, Miloš A; Stojanović, Vladimir M; Ram, Rajeev J
2018-04-01
Electronic and photonic technologies have transformed our lives, from computing and mobile devices to information technology and the internet. Our future demands in these fields require innovation in each technology separately, but also depend on our ability to harness their complementary physics through integrated solutions [1,2]. This goal is hindered by the fact that most silicon nanotechnologies, which enable our processors, computer memory, communications chips and image sensors, rely on bulk silicon substrates, a cost-effective solution with an abundant supply chain, but with substantial limitations for the integration of photonic functions. Here we introduce photonics into bulk silicon complementary metal-oxide-semiconductor (CMOS) chips using a layer of polycrystalline silicon deposited on silicon oxide (glass) islands fabricated alongside transistors. We use this single deposited layer to realize optical waveguides and resonators, high-speed optical modulators and sensitive avalanche photodetectors. We integrated this photonic platform with a 65-nanometre-transistor bulk CMOS process technology inside a 300-millimetre-diameter-wafer microelectronics foundry. We then implemented integrated high-speed optical transceivers in this platform that operate at ten gigabits per second, composed of millions of transistors, and arrayed on a single optical bus for wavelength division multiplexing, to address the demand for high-bandwidth optical interconnects in data centres and high-performance computing [3,4]. By decoupling the formation of photonic devices from that of transistors, this integration approach can achieve many of the goals of multi-chip solutions [5], but with the performance, complexity and scalability of 'systems on a chip' [1,6-8]. As transistors smaller than ten nanometres across become commercially available [9], and as new nanotechnologies emerge [10,11], this approach could provide a way to integrate photonics with state-of-the-art nanoelectronics.
"Small Talk Is Not Cheap": Phatic Computer-Mediated Communication in Intercultural Classes
ERIC Educational Resources Information Center
Maíz-Arévalo, Carmen
2017-01-01
The present study aims to analyse the phatic exchanges performed by a class of nine intercultural Master's students during a collaborative assignment which demanded online discussion using English as a lingua franca (ELF). Prior studies on the use of phatic communication in computer-mediated communication have concentrated on social networking…
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
Majeed, Raphael W; Stöhr, Mark R; Röhrig, Rainer
2012-01-01
Notifications and alerts play an important role in daily clinical routine. The rising prevalence of clinical decision support systems and electronic health records also results in increasing demands on notification systems. Failure to adequately communicate a critical value is a potential cause of adverse events. Critical laboratory values and changing vital data depend on timely notification of medical staff. Vital monitors and medical devices rely on acoustic signals for alerting, which are prone to "alert fatigue" and require medical staff to be present within audible range. Personal computers are unsuitable for displaying time-critical notification messages, since the targeted medical staff are not always operating or watching the computer. On the other hand, mobile phones and smart devices enjoy increasing popularity. Previous notification systems sending text messages to mobile phones depend on asynchronous confirmations. By utilizing an automated telephony server, we provide a method to deliver notifications quickly and independently of the recipients' whereabouts while allowing immediate feedback and confirmations. Evaluation results suggest the feasibility of the proposed notification system for real-time notifications.
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
2012-03-19
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community
2012-01-01
Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538
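The provisioning step described in both records above amounts to launching an on-demand instance from a machine image. A minimal boto3 sketch of that step is given below, assuming valid AWS credentials; the AMI ID, key-pair name, instance type, and region are placeholders rather than the actual Cloud BioLinux image details.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one on-demand instance from a (placeholder) machine image.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder, not the Cloud BioLinux AMI
        InstanceType="m5.xlarge",          # placeholder instance size
        KeyName="my-keypair",              # placeholder SSH key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print("launched", instance_id)

    # Terminate the instance when the analysis is finished to stop charges.
    ec2.terminate_instances(InstanceIds=[instance_id])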
Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X
2016-05-01
The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.
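The abstract does not give the exact kinship definitions used by PEPIS, so the sketch below only illustrates the general kind of computation performed in the first sub-pipeline: a standard additive (VanRaden-style) genomic relationship matrix computed from a 0/1/2 genotype matrix with NumPy; data and dimensions are synthetic.

    import numpy as np

    def additive_kinship(genotypes):
        # genotypes: n_individuals x n_markers array coded 0/1/2.
        p = genotypes.mean(axis=0) / 2.0           # allele frequencies per marker
        Z = genotypes - 2.0 * p                    # centre each marker column
        denom = 2.0 * np.sum(p * (1.0 - p))        # VanRaden normalisation constant
        return (Z @ Z.T) / denom

    rng = np.random.default_rng(1)
    G = rng.integers(0, 3, size=(50, 1000))        # toy data: 50 individuals, 1000 markers
    K = additive_kinship(G)
    print(K.shape)                                 # (50, 50) matrix fed to the mixed-model step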
Exploiting GPUs in Virtual Machine for BioCloud
Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon
2013-01-01
Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computational performance and utilize effectively unlimited cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on the mechanism for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By enabling each VM to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465
Exploiting GPUs in virtual machine for BioCloud.
Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon
2013-01-01
Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computational performance and utilize effectively unlimited cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on the mechanism for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By enabling each VM to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.
Moving code - Sharing geoprocessing logic on the Web
NASA Astrophysics Data System (ADS)
Müller, Matthias; Bernard, Lars; Kadner, Daniel
2013-09-01
Efficient data processing is a long-standing challenge in remote sensing. Effective and efficient algorithms are required for product generation in ground processing systems, event-based or on-demand analysis, environmental monitoring, and data mining. Furthermore, the increasing number of survey missions and the exponentially growing data volume in recent years have created demand for better software reuse as well as an efficient use of scalable processing infrastructures. Solutions that address both demands simultaneously have begun to slowly appear, but they seldom consider the possibility to coordinate development and maintenance efforts across different institutions, community projects, and software vendors. This paper presents a new approach to share, reuse, and possibly standardise geoprocessing logic in the field of remote sensing. Drawing from the principles of service-oriented design and distributed processing, this paper introduces moving-code packages as self-describing software components that contain algorithmic code and machine-readable descriptions of the provided functionality, platform, and infrastructure, as well as basic information about exploitation rights. Furthermore, the paper presents a lean publishing mechanism by which to distribute these packages on the Web and to integrate them in different processing environments ranging from monolithic workstations to elastic computational environments or "clouds". The paper concludes with an outlook toward community repositories for reusable geoprocessing logic and their possible impact on data-driven science in general.
A generic hydroeconomic model to assess future water scarcity
NASA Astrophysics Data System (ADS)
Neverre, Noémie; Dumas, Patrice
2015-04-01
We developed a generic hydroeconomic model able to confront future water supply and demand on a large scale, taking into account man-made reservoirs. The assessment is done at the scale of river basins, using only globally available data; the methodology can thus be generalized. On the supply side, we evaluate the impacts of climate change on water resources. The available quantity of water at each site is computed using the following information: runoff is taken from the outputs of the CNRM climate model (Dubois et al., 2010), reservoirs are located using Aquastat, and the sub-basin flow-accumulation area of each reservoir is determined based on a Digital Elevation Model (HYDRO1k). On the demand side, agricultural and domestic demands are projected in terms of both quantity and economic value. For the agricultural sector, globally available data on irrigated areas and crops are combined in order to determine the localization of irrigated crops. Then, crop irrigation requirements are computed for the different stages of the growing season using the Allen (1998) method with Hargreaves potential evapotranspiration. The economic value of irrigation water is based on a yield comparison approach between rainfed and irrigated crops. Potential irrigated and rainfed yields are taken from LPJmL (Blondeau et al., 2007), or from FAOSTAT by making simple assumptions on yield ratios. For the domestic sector, we project the combined effects of demographic growth, economic development and water cost evolution on future demands. The method consists of building three-block inverse demand functions whose block volume limits evolve with the level of GDP per capita. The value of water along the demand curve is determined from price-elasticity, price and demand data from the literature, using the point-expansion method, and from water cost data. The projected demands are then confronted with future water availability. Operating rules of the reservoirs and water allocation between demands are based on the maximization of water benefits, over time and space. A parameterisation-simulation-optimisation approach is used. This gives a projection of future water scarcity in the different locations and an estimation of the associated direct economic losses from unsatisfied demands. This generic hydroeconomic model can be easily applied to large-scale regions, in particular developing regions where little reliable data is available. We will present an application to Algeria, up to the 2050 horizon.
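The exact functional forms and calibrated coefficients of the three-block inverse demand functions are not given in the abstract; the sketch below is only one plausible construction of the idea (block limits growing with GDP per capita, marginal value stepping down across blocks), with every number a placeholder.

    # Hypothetical three-block inverse domestic demand: marginal value of water
    # ($/m3) as a step-down function of consumed volume (m3/person/year), with
    # block limits that grow with GDP per capita. All coefficients are placeholders.
    def block_limits(gdp_per_capita):
        base = 30.0                                    # essential-use block
        return [base,
                base * (1.0 + gdp_per_capita / 20000.0),
                base * (2.0 + gdp_per_capita / 10000.0)]

    def inverse_demand(volume, gdp_per_capita, block_values=(3.0, 1.0, 0.3)):
        for limit, value in zip(block_limits(gdp_per_capita), block_values):
            if volume <= limit:
                return value
        return 0.0                                     # demand saturated beyond the last block

    for v in (10, 40, 90, 200):
        print(v, inverse_demand(v, gdp_per_capita=15000))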
The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox
NASA Astrophysics Data System (ADS)
Harris, A. T., III; Goodman, J.; Justice, B.
2014-12-01
As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.
NASA Astrophysics Data System (ADS)
Kanta, L.; Berglund, E. Z.
2015-12-01
Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
Forecasting runout of rock and debris avalanches
Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.
2006-01-01
Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.
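Statistical runout methods of this kind typically relate inundated areas to a postulated avalanche volume through power-law scaling of the form A = c1·V^(2/3) (cross-sectional) and B = c2·V^(2/3) (planimetric). The sketch below evaluates such relations for a range of volumes; the coefficients are placeholders, not the calibrated values behind the method compared in the paper.

    # Illustrative volume-area scaling used by statistical runout forecasts.
    # c1 and c2 are placeholder coefficients, not calibrated values.
    c1, c2 = 0.1, 20.0

    for volume_m3 in (1e5, 1e6, 1e7, 1e8):        # postulated avalanche volumes
        A = c1 * volume_m3 ** (2.0 / 3.0)         # cross-sectional area (m2)
        B = c2 * volume_m3 ** (2.0 / 3.0)         # planimetric inundation area (m2)
        print(f"V={volume_m3:.0e} m3  A={A:.3e} m2  B={B:.3e} m2")

Intersecting the predicted planimetric area with flow paths below a prospective source area is what yields the runout maps mentioned above.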
Example-Based Super-Resolution Fluorescence Microscopy.
Jia, Shu; Han, Boran; Kutz, J Nathan
2018-04-23
Capturing biological dynamics with high spatiotemporal resolution demands the advancement in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time that is needed to achieve higher spatial information. Here, we demonstrated an example-based, computational method that aims to obtain super-resolution images using conventional imaging without increasing the imaging time. With a low-resolution image input, the method provides an estimate of its super-resolution image based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with the experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data aided biomedical imaging.
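The example-based principle (look up the high-resolution counterpart of each low-resolution patch in a database of paired examples) can be reduced to a toy nearest-neighbour sketch; this is only an illustration of the general idea, not the authors' estimation pipeline, and all sizes and data are synthetic.

    import numpy as np

    # Toy example-based super-resolution: for each low-res patch of the input,
    # find the most similar low-res exemplar in a paired database and paste in
    # the corresponding high-res patch.
    rng = np.random.default_rng(2)
    n_pairs, lo, hi = 500, 4, 8
    db_low = rng.random((n_pairs, lo * lo))          # low-res exemplars (flattened)
    db_high = rng.random((n_pairs, hi, hi))          # paired high-res exemplars

    def super_resolve(image):
        out = np.zeros((image.shape[0] * 2, image.shape[1] * 2))
        for i in range(0, image.shape[0], lo):
            for j in range(0, image.shape[1], lo):
                patch = image[i:i + lo, j:j + lo].ravel()
                idx = np.argmin(((db_low - patch) ** 2).sum(axis=1))   # nearest exemplar
                out[2 * i:2 * i + hi, 2 * j:2 * j + hi] = db_high[idx]
        return out

    low_res = rng.random((16, 16))
    print(super_resolve(low_res).shape)              # (32, 32) estimated image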
Bringing Computational Thinking into the High School Science and Math Classroom
NASA Astrophysics Data System (ADS)
Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; University CT-STEM Project, Northwestern; University CenterTalent Development, Northwestern
2013-01-01
Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.
Application of Cloud Computing at KTU: MS Live@Edu Case
ERIC Educational Resources Information Center
Miseviciene, Regina; Budnikas, Germanas; Ambraziene, Danute
2011-01-01
Cloud computing is a significant alternative in today's educational perspective. The technology gives the students and teachers the opportunity to quickly access various application platforms and resources through the web pages on-demand. Unfortunately, not all educational institutions often have an ability to take full advantages of the newest…
Computer Simulations as an Integral Part of Intermediate Macroeconomics.
ERIC Educational Resources Information Center
Millerd, Frank W.; Robertson, Alastair R.
1987-01-01
Describes the development of two interactive computer simulations which were fully integrated with other course materials. The simulations illustrate the effects of various real and monetary "demand shocks" on aggregate income, interest rates, and components of spending and economic output. Includes an evaluation of the simulations'…
The Ever-Present Demand for Public Computing Resources. CDS Spotlight
ERIC Educational Resources Information Center
Pirani, Judith A.
2014-01-01
This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…
2001 Industry Studies: Information
2001-01-01
increasingly demand communications, computers, and software for use in the Internet, intranets, and extranets. Information technology (IT)-enabled... As the number of Internet users increases, so does the demand for the rapid deployment of information and telecommunication technologies. The key... proliferation has become uncontrollable. Only then will the US maintain the lead in the IT market.
An Ecological Framework for Cancer Communication: Implications for Research
Intille, Stephen S; Zabinski, Marion F
2005-01-01
The field of cancer communication has undergone a major revolution as a result of the Internet. As recently as the early 1990s, face-to-face, print, and the telephone were the dominant methods of communication between health professionals and individuals in support of the prevention and treatment of cancer. Computer-supported interactive media existed, but this usually required sophisticated computer and video platforms that limited availability. The introduction of point-and-click interfaces for the Internet dramatically improved the ability of non-expert computer users to obtain and publish information electronically on the Web. Demand for Web access has driven computer sales for the home setting and improved the availability, capability, and affordability of desktop computers. New advances in information and computing technologies will lead to similarly dramatic changes in the affordability and accessibility of computers. Computers will move from the desktop into the environment and onto the body. Computers are becoming smaller, faster, more sophisticated, more responsive, less expensive, and—essentially—ubiquitous. Computers are evolving into much more than desktop communication devices. New computers include sensing, monitoring, geospatial tracking, just-in-time knowledge presentation, and a host of other information processes. The challenge for cancer communication researchers is to acknowledge the expanded capability of the Web and to move beyond the approaches to health promotion, behavior change, and communication that emerged during an era when language- and image-based interpersonal and mass communication strategies predominated. Ecological theory has been advanced since the early 1900s to explain the highly complex relationships among individuals, society, organizations, the built and natural environments, and personal and population health and well-being. This paper provides background on ecological theory, advances an Ecological Model of Internet-Based Cancer Communication intended to broaden the vision of potential uses of the Internet for cancer communication, and provides some examples of how such a model might inform future research and development in cancer communication. PMID:15998614
TORC3: Token-ring clearing heuristic for currency circulation
NASA Astrophysics Data System (ADS)
Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael
2012-10-01
Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
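To make the problem that clearing algorithms solve concrete, the sketch below nets a small matrix of pending credit messages and reports how much currency would actually have to move. It illustrates the multilateral netting objective only; it is not the TORC3 token-ring heuristic or a MILP formulation, and the figures are synthetic.

    import numpy as np

    # Pending credit messages: owes[i, j] is the amount agent i owes agent j.
    owes = np.array([
        [0, 50, 20],
        [30, 0, 10],
        [5, 40, 0],
    ], dtype=float)

    gross = owes.sum()                          # value if every message settled individually
    net = owes.sum(axis=0) - owes.sum(axis=1)   # receivables minus payables per agent

    # Only negative net positions must be covered with actual currency; clearing
    # procedures aim to settle the messages with (near) this minimum transfer.
    required_liquidity = -net[net < 0].sum()
    print("gross flow:", gross, "net positions:", net,
          "currency needed:", required_liquidity)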
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers) in general. The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a very robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of the developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness"), to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while deviations from the full 3D model are considered to be small for these processes.
Reduction of peak energy demand based on smart appliances energy consumption adjustment
NASA Astrophysics Data System (ADS)
Powroźnik, P.; Szulim, R.
2017-08-01
In the paper the concept of an elastic model of energy management for the smart grid and micro smart grid is presented. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between the demand and supply of current power for the given micro smart grid at the given moment. The results of the simulation studies are presented. They were carried out on real household data available in the UCI Machine Learning Repository. The results may have practical application in smart grid networks, where there is a need to adjust the energy consumption of smart appliances. The article presents a proposal to implement the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction may be particularly applicable in a large smart grid.
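One simple way to picture the demand-supply balancing described above is a greedy shift of deferrable smart-appliance load out of intervals where demand exceeds the supply cap; the sketch below is a hypothetical illustration of that concept only, not the elastic energy-management model defined in the paper.

    # Toy peak-demand reduction: move deferrable appliance load from intervals
    # where demand exceeds the supply cap to the least-loaded later interval.
    # All numbers and the greedy rule are illustrative only.
    supply_cap = [5.0, 5.0, 5.0, 5.0]            # kW available per interval
    base_load = [3.0, 4.5, 2.0, 1.5]             # non-deferrable demand (kW)
    deferrable = [1.0, 2.0, 0.5, 0.0]            # smart-appliance demand (kW)

    load = [b + d for b, d in zip(base_load, deferrable)]
    for t in range(len(load)):
        excess = load[t] - supply_cap[t]
        while excess > 0 and deferrable[t] > 0:
            shift = min(excess, deferrable[t])
            target = min(range(t + 1, len(load)), default=None,
                         key=lambda s: load[s])
            if target is None:
                break
            load[t] -= shift
            deferrable[t] -= shift
            load[target] += shift
            excess = load[t] - supply_cap[t]

    print(load)   # the peak in interval 1 is reduced by shifting deferrable demand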
EBR-II high-ramp transients under computer control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrester, R.J.; Larson, H.A.; Christensen, L.J.
1983-01-01
During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control of demand power during the transients.
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-03-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-01-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335
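The theoretical cost/benefit formulae themselves are not reproduced in these records, so the sketch below only illustrates the kind of break-even comparison involved: local serial wall-clock time versus cloud wall-clock time plus dollar cost. Every function, rate, and price here is a placeholder, not a figure from the paper.

    # Hypothetical break-even comparison between local serial execution and
    # on-demand cloud execution; all numbers are placeholders.
    def local_hours(n_subjects, hours_per_subject=2.0):
        return n_subjects * hours_per_subject           # serial on one workstation

    def cloud_hours_and_cost(n_subjects, n_nodes=20, hours_per_subject=2.0,
                             price_per_node_hour=0.40, overhead_hours=0.5):
        wall = overhead_hours + (n_subjects * hours_per_subject) / n_nodes
        cost = wall * n_nodes * price_per_node_hour
        return wall, cost

    for n in (10, 100, 1000):
        wall, cost = cloud_hours_and_cost(n)
        print(f"{n:5d} subjects: local {local_hours(n):7.1f} h "
              f"| cloud {wall:7.1f} h at ${cost:8.2f}")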
NASA Astrophysics Data System (ADS)
Matsypura, Dmytro
In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), Matsypura, Nagurney, and Liu (2006).
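The finite-dimensional variational inequality formulation invoked above is standard and worth stating for reference (this is the textbook form, not the dissertation's specific network model): the equilibrium vector of electric power flows and prices is characterized as

    \text{find } x^{*} \in \mathcal{K} \subseteq \mathbb{R}^{n} \quad \text{such that} \quad \left\langle F(x^{*}),\; x - x^{*} \right\rangle \geq 0 \qquad \forall\, x \in \mathcal{K},

where the feasible set K encodes the network and capacity constraints and F collects the optimality and equilibrium conditions of the generators, suppliers, transmission service providers, and consumers. The associated projected dynamical system \dot{x} = \Pi_{\mathcal{K}}(x, -F(x)) has exactly the solutions of this variational inequality as its stationary points, which is what keeps the dynamic adjustment-process models and the static equilibrium models consistent with one another.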
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists of using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
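Because the PHM output is linear in the branch filter coefficients once the static nonlinearities are fixed (e.g., as monomials), ordinary or ridge-regularized least squares applies directly. The sketch below is a generic NumPy illustration of that linear-in-the-parameters estimation under those assumptions; it is not the regularization kernel proposed in the article, and all system and noise settings are synthetic.

    import numpy as np

    # Minimal Parallel Hammerstein estimation: each branch is a monomial
    # nonlinearity u**k followed by an FIR filter, so the output is linear in
    # the FIR coefficients and (ridge-regularised) least squares applies.
    rng = np.random.default_rng(3)
    N, order, taps = 2000, 3, 10
    u = rng.standard_normal(N)

    # Simulate a "true" system to create identification data.
    true_h = [rng.standard_normal(taps) * 0.5 ** k for k in range(1, order + 1)]
    y = sum(np.convolve(u ** k, h)[:N] for k, h in zip(range(1, order + 1), true_h))
    y += 0.01 * rng.standard_normal(N)

    # Build the regression matrix: delayed copies of u**k for every branch.
    cols = []
    for k in range(1, order + 1):
        uk = u ** k
        for d in range(taps):
            cols.append(np.concatenate([np.zeros(d), uk[:N - d]]))
    Phi = np.stack(cols, axis=1)                    # N x (order * taps)

    lam = 1e-3                                      # ridge parameter (placeholder)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    print("relative fit error:",
          np.linalg.norm(Phi @ theta - y) / np.linalg.norm(y))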
Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis
NASA Astrophysics Data System (ADS)
Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.
2011-09-01
For hypersonic vehicles, the demands on aerodynamic performance become more intense. It is therefore important to optimize the shape of the hypersonic vehicle to meet the project demands, and shape optimization is a key technology for improving the performance of the hypersonic vehicle. Based on an existing vehicle, the upper surface of a simplified hypersonic vehicle was optimized to obtain a shape that suits the project demands. At the cruising condition, the upper surface was parameterized with the B-spline curve method. An incremental parametric method and local mesh reconstruction technology were applied. The whole flow field was calculated and the aerodynamic performance of the craft was obtained using computational fluid dynamics (CFD). The vehicle shape was then optimized to achieve the maximum lift-to-drag ratio at angles of attack of 3°, 4° and 5°. The results provide a reference for the practical design.
Accelerating Demand Paging for Local and Remote Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David
2001-01-01
This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
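The core idea, overlapping computation with page reads and keeping several reads in flight, can be illustrated with a small thread-pool prefetcher; the sketch below is a generic Python illustration under simulated I/O and compute costs, not the paper's visualization code.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def read_page(page_id):
        # Stand-in for reading one data page from local or remote disk.
        time.sleep(0.05)                      # simulated I/O latency
        return bytes(64)                      # fake page contents

    def process_page(page_id, data):
        time.sleep(0.02)                      # simulated visualization work

    page_ids = list(range(32))
    lookahead = 4                             # number of reads kept in flight

    with ThreadPoolExecutor(max_workers=lookahead) as pool:
        futures = {p: pool.submit(read_page, p) for p in page_ids[:lookahead]}
        for i, p in enumerate(page_ids):
            data = futures.pop(p).result()    # wait only if the read lags behind
            nxt = i + lookahead
            if nxt < len(page_ids):           # keep the read pipeline full
                futures[page_ids[nxt]] = pool.submit(read_page, page_ids[nxt])
            process_page(p, data)             # compute overlaps with pending reads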
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
Web-based virtual tours have become a desirable and in-demand application, yet they are challenging to build given the nature of the web environment, with its limited bandwidth and lack of guaranteed computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach for such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth, and high computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, so virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour presented here uses Tour into the Picture, employing pseudo-3D geometry with an image-based rendering approach to give viewers the immersive experience of walking around the virtual space using several snapshots from conventional photos.
van Oosterom, Matthias N; van der Poel, Henk G; Navab, Nassir; van de Velde, Cornelis J H; van Leeuwen, Fijs W B
2018-03-01
To provide an overview of the developments made for virtual- and augmented-reality navigation procedures in urological interventions/surgery. Navigation efforts have demonstrated potential in the field of urology by supporting guidance for various disorders. The navigation approaches differ between the individual indications, but seem interchangeable to a certain extent. An increasing number of pre- and intra-operative imaging modalities has been used to create detailed surgical roadmaps, namely: (cone-beam) computed tomography, MRI, ultrasound, and single-photon emission computed tomography. Registration of these surgical roadmaps with the real-life surgical view has occurred in different forms (e.g. electromagnetic, mechanical, vision, or near-infrared optical-based), whereby the combination of approaches was suggested to provide a superior outcome. Soft-tissue deformations demand the use of confirmatory interventional (imaging) modalities. This has resulted in the introduction of new intraoperative modalities such as drop-in US, transurethral US, (drop-in) gamma probes and fluorescence cameras. These noninvasive modalities provide an alternative to invasive technologies that expose the patients to X-ray doses. Whereas some reports have indicated navigation setups provide equal or better results than conventional approaches, most trials have been performed in relatively small patient groups and clear follow-up data are missing. The reported computer-assisted surgery research concepts provide a glimpse into the future application of navigation technologies in the field of urology.
Lilford, R J; Bingham, P; Bourne, G L; Chard, T
1985-04-01
An inexpensive microcomputer has been programmed to obtain histories from patients attending a pregnancy termination clinic. The system is nurse-interactive; yes/no and multiple-choice questions are answered on the visual display unit by a light pen. Proper nouns and discursive text are typed at the computer keyboard. A neatly formatted summary of the history is then provided by an interfaced printer. The history follows a branching pattern; of the 370 questions included in the program, only 68 are answered in the course of an average history. The program contains numerous error traps and the user may request explanations of questions which are not immediately understood. The system was designed to ensure that no factors of anaesthetic or medical importance would be overlooked in the busy out-patient clinic. The computer provides a much more complete history with an average of 42 more items of information than the pre-existing manual system. This system is demanding of nursing time and possible conversion to a patient-interactive system is discussed. A confidential questionnaire revealed a high degree of consumer acceptance.
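A toy sketch of the branching idea described above (each answer determines the next question, so only a fraction of the question bank is traversed for any one patient) is given below; the questions and branching are invented, not the clinic's actual questionnaire, and the light-pen input is reduced to a simple callback.

```python
# Each entry maps a question id to (question text, {answer: next question id}).
# A next id of None ends the traversal. All content here is a made-up placeholder.
QUESTIONS = {
    "q1": ("Have you had any previous operations?", {"yes": "q2", "no": "q3"}),
    "q2": ("Were there any anaesthetic complications?", {"yes": "q3", "no": "q3"}),
    "q3": ("Do you have any allergies?", {"yes": None, "no": None}),
}

def take_history(answer_fn, start="q1"):
    history, qid = [], start
    while qid is not None:
        text, branches = QUESTIONS[qid]
        answer = answer_fn(text)          # e.g., read from a light-pen menu
        history.append((text, answer))
        qid = branches.get(answer)        # unknown answers simply end this toy traversal
    return history

# Usage: answer "yes" to everything
print(take_history(lambda question: "yes"))
```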
A fast discrete S-transform for biomedical signal processing.
Brown, Robert A; Frayne, Richard
2008-01-01
Determining the frequency content of a signal is a basic operation in signal and image processing. The S-transform provides both the true frequency and globally referenced phase measurements characteristic of the Fourier transform and also generates local spectra, as does the wavelet transform. Due to this combination, the S-transform has been successfully demonstrated in a variety of biomedical signal and image processing tasks. However, the computational demands of the S-transform have limited its application in medicine to this point in time. This abstract introduces the fast S-transform, a more efficient discrete implementation of the classic S-transform with dramatically reduced computational requirements.
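For readers unfamiliar with the transform being accelerated, the sketch below implements the classic discrete S-transform (Stockwell transform) via the FFT. It is the straightforward O(N² log N) reference version, not the fast variant introduced in the abstract, and the symmetric frequency-wrapping convention used for the Gaussian window is one common choice.

```python
import numpy as np

def stockwell_transform(h):
    """Classic discrete S-transform of a 1-D signal h.

    Returns an (N//2 + 1, N) complex array: rows index frequency, columns index time."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    H = np.fft.fft(h)
    p = np.arange(N)
    p[p > N // 2] -= N                      # wrap frequency samples to [-N/2, N/2)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = h.mean()                      # zero-frequency (DC) row is the signal mean
    for n in range(1, N // 2 + 1):
        G = np.exp(-2.0 * np.pi**2 * p**2 / n**2)   # Gaussian window in the frequency domain
        S[n, :] = np.fft.ifft(np.roll(H, -n) * G)   # shift spectrum by n, window, invert
    return S

# Usage on a toy chirp-like signal
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * (10 + 20 * t) * t)
print(stockwell_transform(signal).shape)    # (129, 256) time-frequency map
```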
Atlas2 Cloud: a framework for personal genome analysis in the cloud
2012-01-01
Background Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. Results We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. Conclusions We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms. PMID:23134663
Atlas2 Cloud: a framework for personal genome analysis in the cloud.
Evani, Uday S; Challis, Danny; Yu, Jin; Jackson, Andrew R; Paithankar, Sameer; Bainbridge, Matthew N; Jakkamsetti, Adinarayana; Pham, Peter; Coarfa, Cristian; Milosavljevic, Aleksandar; Yu, Fuli
2012-01-01
Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms.
The next generation of command post computing
NASA Astrophysics Data System (ADS)
Arnold, Ross D.; Lieb, Aaron J.; Samuel, Jason M.; Burger, Mitchell A.
2015-05-01
The future of command post computing demands an innovative new solution to address a variety of challenging operational needs. The Command Post of the Future is the Army's primary command and control decision support system, providing situational awareness and collaborative tools for tactical decision making, planning, and execution management from Corps to Company level. However, as the U.S. Army moves towards a lightweight, fully networked battalion, disconnected operations, thin client architecture and mobile computing become increasingly essential. The Command Post of the Future is not designed to support these challenges in the coming decade. Therefore, research into a hybrid blend of technologies is in progress to address these issues. This research focuses on a new command and control system utilizing the rich collaboration framework afforded by Command Post of the Future coupled with a new user interface consisting of a variety of innovative workspace designs. This new system is called Tactical Applications. This paper details a brief history of command post computing, presents the challenges facing the modern Army, and explores the concepts under consideration for Tactical Applications that meet these challenges in a variety of innovative ways.
Computational biomedicine: a challenge for the twenty-first century.
Coveney, Peter V; Shublaq, Nour W
2012-01-01
With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge for computational science insofar as we need to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.
Creating CAD designs and performing their subsequent analysis using opensource solutions in Python
NASA Astrophysics Data System (ADS)
Iakushkin, Oleg O.; Sedova, Olga S.
2018-01-01
The paper discusses the concept of a system that encapsulates the transition from geometry building to strength tests. The solution we propose views the engineer as a programmer who is capable of coding the procedure for working with the model, i.e., outlining the necessary transformations and creating cases for boundary conditions. We propose a prototype of such a system. In our work, we used the Python programming language to create the program; the Jupyter framework to create a single workspace for visualization; the pythonOCC library to implement CAD; the FEniCS library to implement FEM; and the GMSH and VTK utilities. The prototype is launched on a platform that is a dynamically expandable multi-tenant cloud service providing users with all computing resources on demand. However, the system may be deployed locally for prototyping or for work that does not involve resource-intensive computing. To make this possible, we used containerization, isolating the system in a Docker container.
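To give a feel for the scripted "engineer as programmer" workflow, here is a minimal legacy-FEniCS (dolfin) sketch that sets up and solves a placeholder Poisson problem on a unit square. It is not the authors' pipeline: the geometry step with pythonOCC/GMSH is omitted, and the mesh, boundary condition, and load are assumptions chosen only to keep the example self-contained.

```python
# Minimal FEM script in the spirit of coding the analysis procedure directly,
# using legacy FEniCS (dolfin). Problem data are placeholders.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    DirichletBC, Constant, Function, dot, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)                      # placeholder geometry/mesh
V = FunctionSpace(mesh, "P", 1)                    # linear Lagrange elements

u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")  # one scripted boundary-condition case

a = dot(grad(u), grad(v)) * dx                     # bilinear form
L = Constant(1.0) * v * dx                         # unit source term (placeholder load)

u_h = Function(V)
solve(a == L, u_h, bc)                             # solve and keep the field for post-processing
print(u_h.vector().max())
```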
Comparison of sound power radiation from isolated airfoils and cascades in a turbulent flow.
Blandeau, Vincent P; Joseph, Phillip F; Jenkins, Gareth; Powles, Christopher J
2011-06-01
An analytical model of the sound power radiated from a flat plate airfoil of infinite span in a 2D turbulent flow is presented. The effects of stagger angle on the radiated sound power are included so that the sound power radiated upstream and downstream relative to the fan axis can be predicted. Closed-form asymptotic expressions, valid at low and high frequencies, are provided for the upstream, downstream, and total sound power. A study of the effects of chord length on the total sound power at all reduced frequencies is presented. Excellent agreement for frequencies above a critical frequency is shown between the fast analytical isolated airfoil model presented in this paper and an existing, computationally demanding, cascade model, in which the unsteady loading of the cascade is computed numerically. Reasonable agreement is also observed at low frequencies for low solidity cascade configurations. © 2011 Acoustical Society of America
Exploring Architectural Details Through a Wearable Egocentric Vision Device
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-01-01
Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience. PMID:26901197
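The covariance-of-local-features descriptor can be sketched in a few lines: features sampled over an image region are summarized by their covariance matrix. The feature layout below is a placeholder, and the log-Euclidean comparison shown is one common way to compare such descriptors, not necessarily the paper's exact metric.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """features: (N, d) array of local feature vectors sampled over a region
    (e.g., pixel coordinates, intensity, gradient responses).
    Returns the d x d covariance matrix used as the region descriptor."""
    C = np.cov(features, rowvar=False)
    return C + eps * np.eye(C.shape[0])     # regularize to keep it positive definite

def log_euclidean_distance(C1, C2):
    """Compare two covariance descriptors on the SPD manifold (log-Euclidean metric)."""
    def spd_log(C):
        w, U = np.linalg.eigh(C)
        return (U * np.log(w)) @ U.T
    return np.linalg.norm(spd_log(C1) - spd_log(C2), ord="fro")

# Toy usage with random placeholder features
rng = np.random.default_rng(0)
d1 = covariance_descriptor(rng.standard_normal((500, 5)))
d2 = covariance_descriptor(rng.standard_normal((500, 5)))
print(log_euclidean_distance(d1, d2))
```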
Lazarou, Ioulietta; Nikolopoulos, Spiros; Petrantonakis, Panagiotis C.; Kompatsiaris, Ioannis; Tsolaki, Magda
2018-01-01
People with severe neurological impairments face many challenges in sensorimotor functions and communication with the environment; therefore they have increased demand for advanced, adaptive and personalized rehabilitation. During the last several decades, numerous studies have developed brain–computer interfaces (BCIs) with the goals ranging from providing means of communication to functional rehabilitation. Here we review the research on non-invasive, electroencephalography (EEG)-based BCI systems for communication and rehabilitation. We focus on the approaches intended to help severely paralyzed and locked-in patients regain communication using three different BCI modalities: slow cortical potentials, sensorimotor rhythms and P300 potentials, as operational mechanisms. We also review BCI systems for restoration of motor function in patients with spinal cord injury and chronic stroke. We discuss the advantages and limitations of these approaches and the challenges that need to be addressed in the future. PMID:29472849
Exploring Architectural Details Through a Wearable Egocentric Vision Device.
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-02-17
Augmented user experiences in the cultural heritage domain are in increasing demand by the new digital native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide him with information about the details at which he is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience.
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacity. The integration of grid and cloud infrastructure solutions allows us to offer services that can adapt their availability by scaling resources up and down. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that adapt dynamically to the demand for computing resources, with a strong emphasis on optimizing the use of computing resources to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all contribute to increasing the complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources available to the majority of laboratories and research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms and to consolidate existing applications through a phase of reverse engineering in order to adapt them to deployment on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink the way E-Science applications are written so that they are already adapted to exploit the potential of cloud computing services through the use of the IaaS, PaaS, and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds; in this way, when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add additional resources from the public cloud to follow the needs in terms of computational and storage resources, and to release them when the processes are finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional requests for IT capacity that can be used "on demand" for a limited time without having to purchase additional servers.
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.
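The computational-demand relation stated above can be made concrete with a tiny worked example; all numbers are invented placeholders.

```python
# Worked example of the relation above: computational demand equals run time
# multiplied by the number of model runs, divided by parallelization opportunities.
run_time_hours = 2.0      # one forward run of the original model (placeholder)
n_model_runs = 100        # e.g., runs used for sensitivity analysis or calibration
parallel_runs = 10        # model runs that can execute concurrently

wall_clock_hours = run_time_hours * n_model_runs / parallel_runs
print(wall_clock_hours)   # 20.0 hours of wall-clock time
```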
Moderators of the relationship between frequent family demands and inflammation among adolescents.
Levine, Cynthia S; Hoffer, Lauren C; Chen, Edith
2017-05-01
Frequent demands from others in relationships are associated with worse physiological and health outcomes. The present research investigated 2 potential moderators of the relationship between frequency of demands from one's family and inflammatory profiles among adolescents: (a) closeness of adolescents' relationships with their families, and (b) the frequency with which adolescents provided help to their families. Two hundred thirty-four adolescents, ages 13-16 (Mage = 14.53; 47.83% male), completed a daily diary in which they reported on the frequency of demands made by family members. They were also interviewed about the closeness of their family relationships and reported in the daily diary on how frequently they provided help to their families. Adolescents also underwent a blood draw to assess low-grade inflammation and proinflammatory cytokine production in response to bacterial stimulation. More frequent demands from family predicted higher levels of low-grade inflammation and cytokine production in response to bacterial stimulation in adolescents. Family closeness moderated the relationship between frequent demands and stimulated cytokine production such that more frequent demands predicted higher cytokine production among adolescents who were closer to their families. Furthermore, frequency of providing help moderated the relationship between frequent demands and both low-grade inflammation and stimulated cytokine production, such that more frequent demands predicted worse inflammatory profiles among adolescents who provided more help to their families. These findings build on previous work on family demands and health to show under what circumstances family demands might have a physiological cost. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The EPOS Vision for the Open Science Cloud
NASA Astrophysics Data System (ADS)
Jeffery, Keith; Harrison, Matt; Cocco, Massimo
2016-04-01
Cloud computing offers dynamic, elastic scalability for data processing on demand. For much research activity, demand for computing is uneven over time, so cloud computing offers both cost-effectiveness and capacity advantages. However, as reported repeatedly by the EC Cloud Expert Group, there are barriers to the uptake of cloud computing: (1) security and privacy; (2) interoperability (avoidance of lock-in); (3) lack of appropriate systems development environments for application programmers to characterise their applications so that cloud middleware can optimize their deployment and execution. From CERN, the Helix-Nebula group has proposed the architecture for the European Open Science Cloud. They are discussing with other e-Infrastructure groups such as EGI (GRIDs), EUDAT (data curation), AARC (network authentication and authorisation), and also with the EIROFORUM group of 'international treaty' RIs (Research Infrastructures) and the ESFRI (European Strategic Forum for Research Infrastructures) RIs, including EPOS. Many of these RIs are either e-RIs (electronic RIs) or have an e-RI interface for access and use. The EPOS architecture is centred on a portal: ICS (Integrated Core Services). The architectural design already allows for access to e-RIs (which may include any or all of data, software, users, and resources such as computers or instruments). Those within any one domain (subject area) of EPOS are considered within the TCS (Thematic Core Services). Those outside, or available across multiple domains of EPOS, are ICS-d (Integrated Core Services-Distributed), since the intention is that they will be used by any or all of the TCS via the ICS. Another such service type is CES (Computational Earth Science), effectively an ICS-d specializing in high-performance computation, analytics, simulation, or visualization offered by a TCS for others to use. Discussions are already underway between EPOS and EGI, EUDAT, AARC, and Helix-Nebula for those offerings to be considered as ICS-ds by EPOS. Provision of access to ICS-ds from the ICS-C concerns several aspects: (a) technical: it may be more or less difficult to connect and pass the 'package' (probably a virtual machine) of data and software from the ICS-C to the ICS-d/CES; (b) security/privacy: including passing personal information, e.g. related to AAAI (Authentication, Authorization, Accounting Infrastructure); (c) financial and legal: such as payment and licence conditions. Appropriate interfaces from ICS-C to ICS-d are being designed to accommodate these aspects. The Open Science Cloud is timely because it provides a framework to discuss governance and sustainability for computational resource provision, as well as an effective interpretation of a federated approach to HPC (High Performance Computing) and HTC (High Throughput Computing). It will be a unique opportunity to share and adopt procurement policies to provide access to computational resources for RIs. The current state of discussions and the expected roadmap for the EPOS-Open Science Cloud relationship are presented.
Achieving a Launch on Demand Capability
NASA Astrophysics Data System (ADS)
Greenberg, Joel S.
2002-01-01
The ability to place payloads [satellites] into orbit as and when required, often referred to as launch on demand, continues to be an elusive and largely unfulfilled goal. But what is the value of achieving launch on demand [LOD], and what metrics are appropriate? Achievement of a desired level of LOD capability must consider transportation system throughput, the alternative transportation systems that comprise the transportation architecture, transportation demand, reliability and failure recovery characteristics of the alternatives, schedule guarantees, launch delays, payload integration schedules, procurement policies, and other factors. Measures of LOD capability should relate to the objective of the transportation architecture: the placement of payloads into orbit as and when required. Launch on demand capability must be defined in probabilistic terms, such as the probability of not incurring a delay in excess of T when it is determined that it is necessary to place a payload into orbit. Three specific aspects of launch on demand are considered: [1] the ability to recover from adversity [i.e., a launch failure] and to keep up with the steady-state demand for placing satellites into orbit [this has been referred to as operability and resiliency], [2] the ability to respond to the requirement to launch a satellite when the need arises unexpectedly, either because of an unexpected [random] on-orbit satellite failure that requires replacement or because of the sudden recognition of an unanticipated requirement, and [3] the ability to recover from adversity [i.e., a launch failure] during the placement of a constellation into orbit. The objective of this paper is to outline a formal approach for analyzing alternative transportation architectures in terms of their ability to provide a LOD capability. The economic aspect of LOD is developed by establishing a relationship between scheduling and the elimination of on-orbit spares while achieving the desired level of on-orbit availability. Results of an analysis are presented. The implications of launch on demand are addressed for each of the above three situations, and related architecture performance metrics and computer simulation models are described that may be used to evaluate the implications of architecture and policy changes in terms of LOD requirements. The models and metrics are aimed at providing answers to such questions as: How well does a specified space transportation architecture respond to satellite launch demand and changes thereto? How well does a normally functioning architecture respond to unanticipated needs? What is the effect of a modification to the architecture on its ability to respond to satellite launch demand, including responding to unanticipated needs? What is the cost of the architecture [including facilities, operations, inventory, and satellites]? What is the sensitivity of overall architecture effectiveness and cost to various transportation system delays? What is the effect of adding [or eliminating] a launch vehicle or family of vehicles to [from] the architecture on its effectiveness and cost? What is the value of improving launch vehicle and satellite compatibility, and what are the effects on probability-of-delay statistics and on the cost of designing for multi-launch-vehicle compatibility?
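Because LOD capability is framed probabilistically above (the probability of not incurring a delay in excess of T), a toy Monte Carlo sketch may help illustrate the kind of metric involved; the delay distribution, failure probability, and recovery time below are invented placeholders, not calibrated architecture data or the author's simulation models.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_delay_exceeds(T_days, n_trials=100_000,
                       p_fail=0.05, base_delay=30.0, recovery_delay=180.0):
    """Estimate P(delay > T) for a single launch request.

    Assumptions (all placeholders): a launch normally incurs an exponentially
    distributed processing delay with mean base_delay; with probability p_fail
    the vehicle fails and a fixed stand-down/recovery period is added."""
    delays = rng.exponential(base_delay, n_trials)
    failures = rng.random(n_trials) < p_fail
    delays[failures] += recovery_delay
    return np.mean(delays > T_days)

# Probability that the payload waits more than 90 days under these toy assumptions
print(prob_delay_exceeds(90.0))
```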
NASA Astrophysics Data System (ADS)
McGibbney, L. J.; Rittger, K.; Painter, T. H.; Selkowitz, D.; Mattmann, C. A.; Ramirez, P.
2014-12-01
As part of a JPL-USGS collaboration to expand the distribution of essential climate variables (ECV) to include on-demand fractional snow cover, we describe our experience and implementation of a shift towards the use of NVIDIA's CUDA® parallel computing platform and programming model. In particular, the on-demand aspect of this work involves the improvement (via faster processing and a reduction in overall running times) of the determination of fractional snow-covered area (fSCA) from Landsat TM/ETM+. Our observations indicate that processing tasks associated with remote sensing, including the Snow Covered Area and Grain Size Model (SCAG) when applied to MODIS or Landsat TM/ETM+, are computationally intensive. We believe the shift to the CUDA programming paradigm represents a significant improvement in the ability to determine the outcomes of such activities more quickly. We use the TMSCAG model as our subject to highlight this argument, describing how we can ingest a Landsat surface reflectance image (typically provided in HDF format) and perform spectral mixture analysis to produce land cover fractions, including snow, vegetation, and rock/soil, while greatly reducing the running time of such tasks. Within the scope of this work we first document the original workflow used to determine fSCA for Landsat TM and its primary shortcomings. We then introduce the logic and justification behind the switch to the CUDA paradigm for running single as well as batch jobs on the GPU in order to achieve parallel processing. Finally, we share lessons learned from the consolidation of a myriad of existing algorithms into a single set of code in a single target language, as well as the benefits this ultimately provides scientists at the USGS.
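The per-pixel spectral mixture analysis step can be illustrated with a serial CPU sketch using non-negative least squares; the endmember spectra and pixel values below are made up, and this is not the TMSCAG or CUDA implementation itself.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative endmember matrix (bands x endmembers): snow, vegetation, rock/soil.
# The reflectance values are placeholders, not calibrated spectra.
E = np.array([[0.95, 0.05, 0.20],
              [0.90, 0.45, 0.25],
              [0.80, 0.30, 0.30],
              [0.10, 0.35, 0.35]])

def unmix_pixel(reflectance):
    """Non-negative least-squares unmixing of one pixel's band vector."""
    fractions, _ = nnls(E, reflectance)
    return fractions / fractions.sum()     # normalize to fractional cover

pixel = np.array([0.55, 0.60, 0.50, 0.20])
print(unmix_pixel(pixel))                  # approximate snow / vegetation / soil fractions
```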
Evolving the Technical Infrastructure of the Planetary Data System for the 21st Century
NASA Technical Reports Server (NTRS)
Beebe, Reta F.; Crichton, D.; Hughes, S.; Grayzeck, E.
2010-01-01
The Planetary Data System (PDS) was established in 1989 as a distributed system to assure scientific oversight. Initially the PDS followed guidelines recommended by the National Academies Committee on Data Management and Computation (CODMAC, 1982) and placed emphasis on archiving validated datasets. But over time, user demands, supported by increased computing capabilities and communication methods, have placed increasing demands on the PDS. The PDS must add additional services to better enable scientific analysis within distributed environments and to ensure that those services integrate with existing systems and data. To face these challenges, the Planetary Data System (PDS) must modernize its architecture and technical implementation. The PDS 2010 project addresses these challenges. As part of this project, the PDS has three fundamental project goals: (1) providing more efficient delivery of data by data providers to the PDS; (2) enabling a stable, long-term usable planetary science data archive; and (3) enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. In order to achieve these goals, the PDS 2010 project is upgrading both the technical infrastructure and the data standards to support increased efficiency in data delivery as well as usability of the PDS. Efforts are underway to interface with missions as early as possible and to streamline the preparation and delivery of data to the PDS. Likewise, the PDS is working to define and plan for data services that will help researchers to perform analysis in cost-constrained environments. This presentation will cover the PDS 2010 project including the goals, data standards, and technical implementation plans that are underway within the Planetary Data System. It will discuss the plans for moving from the current system, version PDS 3, to version PDS 4.
A service brokering and recommendation mechanism for better selecting cloud services.
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost, and Service Level Agreement (SLA)), offers rational recommendations based on user preferences and practical cloud provisioning, and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
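As one concrete, deliberately simple instance of preference-aware evaluation, the sketch below ranks candidate configurations by simple additive weighting; the provider names, criteria values, and weights are invented placeholders, and the paper's actual evaluation model may differ.

```python
import numpy as np

# Each row is a candidate cloud configuration; columns are criteria
# (compute capability, estimated monthly cost, SLA availability). Placeholder values.
candidates = ["provider-A", "provider-B", "provider-C"]
scores = np.array([[64.0, 420.0, 99.95],
                   [32.0, 250.0, 99.90],
                   [96.0, 610.0, 99.99]])
benefit = np.array([True, False, True])    # cost is a "lower is better" criterion
weights = np.array([0.5, 0.3, 0.2])        # user preference weights, summing to 1

# Simple additive weighting: normalize each criterion, inverting cost-type ones.
norm = scores / scores.max(axis=0)
norm[:, ~benefit] = scores[:, ~benefit].min(axis=0) / scores[:, ~benefit]
ranking = norm @ weights

print(candidates[int(np.argmax(ranking))])   # recommended configuration under these weights
```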
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
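The master/worker aggregation pattern described above can be sketched with mpi4py; the toy simulate function stands in for the EGS5 binaries, and the handshaking, virtual-cluster allocation, and cloud provisioning steps are omitted, so this is only an illustration of the distribute-and-reduce step.

```python
# Minimal mpi4py sketch of distributing histories among nodes and aggregating
# the partial results on the master (rank 0). Run with, e.g.:
#   mpiexec -n 4 python mc_cloud_sketch.py
import numpy as np
from mpi4py import MPI

def simulate(n_histories, seed):
    """Placeholder for the per-node Monte Carlo transport run."""
    rng = np.random.default_rng(seed)
    return rng.random(n_histories).sum()     # stand-in for a dose tally

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

total_histories = 1_000_000
local_tally = simulate(total_histories // size, seed=rank)

# Aggregate the partial tallies on the master node.
tally = comm.reduce(local_tally, op=MPI.SUM, root=0)
if rank == 0:
    print("aggregated tally:", tally)
```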
Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach.
Chertkov, Michael; Chernyak, Vladimir
2017-08-17
Thermostatically controlled loads, e.g., air conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control - changing from on to off, and vice versa, depending on temperature. We considered aggregation of a large group of similar devices into a statistical ensemble, where the devices operate following the same dynamics, subject to stochastic perturbations and randomized, Poisson on/off switching policy. Using theoretical and computational tools of statistical physics, we analyzed how the ensemble relaxes to a stationary distribution and established a relationship between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous temperature) phase space. This allowed us to derive the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g., forced temporary switching off aimed at utilizing the flexibility of the ensemble to provide "demand response" services to change consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
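To make the ensemble picture concrete, here is a small, self-contained simulation sketch of TCL-like devices with bang-bang thresholds plus randomized Poisson switching; the first-order dynamics and all parameter values are invented placeholders, not the paper's model or its calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ensemble of thermostatically controlled loads (all values placeholders).
n, dt, steps = 5000, 0.01, 2000
T_amb, T_min, T_max = 32.0, 20.0, 22.0     # ambient temperature and deadband (deg C)
tau, cool_rate, poisson_rate = 2.0, 8.0, 0.05

T = rng.uniform(T_min, T_max, n)           # device temperatures
on = rng.random(n) < 0.5                   # compressor states

for _ in range(steps):
    # first-order thermal dynamics with stochastic perturbation
    dT = (T_amb - T) / tau - cool_rate * on
    T += dt * dT + 0.05 * np.sqrt(dt) * rng.standard_normal(n)
    on[T > T_max] = True                   # bang-bang control at the deadband edges
    on[T < T_min] = False
    flip = rng.random(n) < poisson_rate * dt   # randomized Poisson on/off switching
    on[flip] = ~on[flip]

# Fraction of devices on approximates one marginal of the stationary distribution.
print("fraction of devices on:", on.mean())
```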
Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach
Chertkov, Michael; Chernyak, Vladimir
2017-01-17
Thermostatically Controlled Loads (TCL), e.g. air-conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control of temperature, changing from on to off, and vice versa, depending on temperature. Aggregation of a large group of similar devices into a statistical ensemble is considered, where the devices operate following the same dynamics, subject to stochastic perturbations and a randomized, Poisson on/off switching policy. We analyze, using theoretical and computational tools of statistical physics, how the ensemble relaxes to a stationary distribution and establish a relation between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous, temperature) phase space. This allowed us to derive and analyze the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how the switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g. forceful temporary switching off aimed at utilizing the flexibility of the ensemble in providing "demand response" services that relieve consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
2014-01-01
Background Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. Results To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. Conclusions By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples. PMID:24475911
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations that are structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA), and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.
Cloud Service Selection Using Multicriteria Decision Analysis
Anuar, Nor Badrul; Shiraz, Muhammad; Haque, Israat Tanzeena
2014-01-01
Cloud computing (CC) has recently been receiving tremendous attention from the IT industry and academic researchers. CC leverages its unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. Cloud services provide dynamically scalable services through the Internet on demand. Therefore, service provisioning plays a key role in CC. The cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve the service selection problem, including multicriteria decision analysis (MCDA). MCDA enables the user to choose from among a number of available choices. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on the state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy through extensive literature review, and (d) analyzing and summarizing the cloud computing service selections in different scenarios. PMID:24696645
Cloud service selection using multicriteria decision analysis.
Whaiduzzaman, Md; Gani, Abdullah; Anuar, Nor Badrul; Shiraz, Muhammad; Haque, Mohammad Nazmul; Haque, Israat Tanzeena
2014-01-01
Cloud computing (CC) has recently been receiving tremendous attention from the IT industry and academic researchers. CC leverages its unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. Cloud services provide dynamically scalable services through the Internet on demand. Therefore, service provisioning plays a key role in CC. The cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve the service selection problem, including multicriteria decision analysis (MCDA). MCDA enables the user to choose from among a number of available choices. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on the state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy through extensive literature review, and (d) analyzing and summarizing the cloud computing service selections in different scenarios.
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
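The plugin idea behind a generalized scheduler interface can be illustrated generically; the sketch below is not Balsam's actual API, and the class names and registry are hypothetical placeholders (only the qsub submit command, which both Cobalt and TORQUE accept, is real).

```python
# Generic sketch of a scheduler-backend plugin registry, in the spirit described
# above. NOT Balsam's real interface; names and structure are hypothetical.
import subprocess

SCHEDULERS = {}

def register(name):
    """Class decorator that adds a backend to the plugin registry."""
    def wrap(cls):
        SCHEDULERS[name] = cls
        return cls
    return wrap

@register("cobalt")
class CobaltScheduler:
    submit_cmd = "qsub"
    def submit(self, script_path):
        # Hand the job script to the site scheduler and return its output (job id).
        result = subprocess.run([self.submit_cmd, script_path],
                                capture_output=True, text=True)
        return result.stdout.strip()

@register("torque")
class TorqueScheduler(CobaltScheduler):
    submit_cmd = "qsub"      # TORQUE also uses qsub, with different flags in practice

def get_scheduler(name):
    return SCHEDULERS[name]()

# Usage: get_scheduler("cobalt").submit("job.sh")
```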
Advances in the Psychosocial Treatment of Addiction
Dallery, Jesse
2012-01-01
Synopsis The authors present an overview of empirically supported psychosocial interventions for individuals with substance use disorders (SUDs), including recent advances in the field. They also identify barriers to the adoption of evidence-based psychosocial treatments in community-based systems of care, and the promise of leveraging technology (computers, web, mobile phone, and emerging technologies) to markedly enhance the reach of these treatments. Technology-based interventions may provide “on-demand,” ubiquitous access to therapeutic support in diverse settings. A brief discussion of important next steps in developing, refining, and disseminating technology-delivered psychosocial interventions concludes the review. PMID:22640767
Ink-jet printing of silver metallization for photovoltaics
NASA Technical Reports Server (NTRS)
Vest, R. W.
1986-01-01
The status of the ink-jet printing program at Purdue University is described. The drop-on-demand printing system was modified to use metallo-organic decomposition (MOD) inks. Also, an IBM AT computer was integrated into the ink-jet printer system to provide operational functions and contact pattern configuration. The integration of the ink-jet printing system, the problems encountered, and the solutions derived are described in detail. The status of ink-jet printing using a MOD ink is discussed. The ink contained silver neodecanate and bismuth 2-ethylhexanoate dissolved in toluene, the MOD ink decomposition products being 99 wt% Ag and 1 wt% Bi.
NASA Automatic Information Security Handbook
NASA Technical Reports Server (NTRS)
1993-01-01
This handbook details the Automated Information Security (AIS) management process for NASA. Automated information system security is becoming an increasingly important issue for all NASA managers; rapid advancements in computer and network technologies and the demanding nature of space exploration and space research have made NASA increasingly dependent on automated systems to store, process, and transmit vast amounts of mission support information, hence the need for AIS systems and management. This handbook provides the consistent policies, procedures, and guidance to assure that an aggressive and effective AIS program is developed, implemented, and sustained at all NASA organizations and NASA support contractors.
Memory-Efficient Analysis of Dense Functional Connectomes.
Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian
2016-01-01
The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
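A minimal sketch of the on-demand idea follows, written in Python rather than the authors' Matlab implementation: only the (voxels x timepoints) time-series array is kept in memory, and each correlation entry is recomputed whenever it is indexed. The class name and interface are illustrative assumptions, and the sketch assumes no constant (zero-variance) time series.

```python
import numpy as np

class OnDemandConnectome:
    """Dense connectome whose entries are computed on demand.

    Only the time-series array is stored; correlation values are recomputed
    whenever they are requested, instead of materializing the full matrix."""
    def __init__(self, timeseries):
        ts = np.asarray(timeseries, dtype=float)
        ts = ts - ts.mean(axis=1, keepdims=True)                 # center each voxel's series
        self._ts = ts / np.linalg.norm(ts, axis=1, keepdims=True)  # unit-normalize rows

    def __getitem__(self, idx):
        i, j = idx
        # Pearson correlation of the z-normalized rows, computed on the fly.
        return float(self._ts[i] @ self._ts[j])

# Usage with random placeholder data: 1000 "voxels", 200 timepoints
rng = np.random.default_rng(0)
conn = OnDemandConnectome(rng.standard_normal((1000, 200)))
print(conn[10, 42])
```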
Memory-Efficient Analysis of Dense Functional Connectomes
Loewe, Kristian; Donohue, Sarah E.; Schoenfeld, Mircea A.; Kruse, Rudolf; Borgelt, Christian
2016-01-01
The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download. PMID:27965565
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible [1], but have also increased the demand for new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated into beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists, and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
NASA Astrophysics Data System (ADS)
Spiliotopoulos, I.; Mirmont, M.; Kruijff, M.
2008-08-01
This paper highlights the flight preparation and mission performance of a PC104-based On-Board Computer for ESA's second Young Engineers' Satellite (YES2), with additional attention to the flight software design and experience of QNX as a multi-process real-time operating system. This combination of Commercial-Off-The-Shelf (COTS) technologies is an accessible option for small satellites with high computational demands.
Psychology of computer use: XXXII. Computer screen-savers as distractors.
Volk, F A; Halcomb, C G
1994-12-01
The differences in performance of 16 male and 16 female undergraduates on three cognitive tasks were investigated in the presence of visual distractors (computer-generated dynamic graphic images). These tasks included skilled and unskilled proofreading and listening comprehension. The visually demanding task of proofreading (skilled and unskilled) showed no significant decreases in performance in the distractor conditions. Results showed significant decrements, however, in performance on listening comprehension in at least one of the distractor conditions.
Cheng, Gui-Juan; Zhang, Xinhao; Chung, Lung Wa; Xu, Liping; Wu, Yun-Dong
2015-02-11
Understanding the mechanisms of chemical reactions, especially catalysis, has been an important and active area of computational organic chemistry, and close collaborations between experimentalists and theorists represent a growing trend. This Perspective provides examples of such productive collaborations. The understanding of various reaction mechanisms and the insight gained from these studies are emphasized. The applications of various experimental techniques in elucidation of reaction details as well as the development of various computational techniques to meet the demand of emerging synthetic methods, e.g., C-H activation, organocatalysis, and single electron transfer, are presented along with some conventional developments of mechanistic aspects. Examples of applications are selected to demonstrate the advantages and limitations of these techniques. Some challenges in the mechanistic studies and predictions of reactions are also analyzed.
Industrial Demand Module - NEMS Documentation
2014-01-01
Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.
On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang
2015-02-01
The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to that of linear model of Coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple BTGP and BTMGP in test cases and multiphase flow computer experiment in a full scale regenerator of a carbon capture unit. The use of the BTMGP with LMC cross-covariance helped to predict the computer experiments relatively better than existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with LMC cross-covariance function.
Computational Methods for MOF/Polymer Membranes.
Erucar, Ilknur; Keskin, Seda
2016-04-01
Metal-organic framework (MOF)/polymer mixed matrix membranes (MMMs) have received significant interest in the last decade. MOFs are incorporated into polymers to make MMMs that exhibit improved gas permeability and selectivity compared with pure polymer membranes. The fundamental challenge in this area is to choose the appropriate MOF/polymer combinations for a gas separation of interest. Even if a single polymer is considered, there are thousands of MOFs that could potentially be used as fillers in MMMs. As a result, there has been a large demand for computational studies that can accurately predict the gas separation performance of MOF/polymer MMMs prior to experiments. We have developed computational approaches to assess gas separation potentials of MOF/polymer MMMs and used them to identify the most promising MOF/polymer pairs. In this Personal Account, we aim to provide a critical overview of current computational methods for modeling MOF/polymer MMMs. We give our perspective on the background, successes, and failures that led to developments in this area and discuss the opportunities and challenges of using computational methods for MOF/polymer MMMs. © 2016 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics
2017-04-19
…research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance cameras as video sources. The architectural considerations of this system are presented. Issues to be reckoned with in implementing a scalable…
NASA Astrophysics Data System (ADS)
Keen, A. S.; Lynett, P. J.; Ayca, A.
2016-12-01
Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. The talk will outline a tool that can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. Monte Carlo methodology is a probabilistic computational tool where the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach uses a distribution of each variable, and then uses that random variable within the described parameters, to generate a single computation. The process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporate the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once calibrated, the model was able to hindcast the damage produced in Santa Cruz Harbor during the 2010 Chile and 2011 Japan events. Results of the Santa Cruz analysis will be presented and discussed.
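As a rough illustration of the demand-versus-capacity Monte Carlo loop described in this abstract, the sketch below uses made-up lognormal demand and normal capacity distributions with an age-reduction factor; in the actual study the demand derives from drag forces driven by MOST current and surface-elevation output, and the capacities from the inspection-based component tables.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(n_trials=100_000):
    """Monte Carlo estimate of P(demand > capacity) for one dock component.

    The distributions below are illustrative placeholders, not the study's
    actual demand and capacity models.
    """
    # Demand: drag-type load, here a lognormal spread around a nominal value (kN).
    demand = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n_trials)

    # Capacity: nominal component strength times an age-related reduction factor.
    nominal_capacity = rng.normal(loc=35.0, scale=5.0, size=n_trials)
    age_factor = rng.uniform(0.6, 1.0, size=n_trials)
    capacity = nominal_capacity * age_factor

    failures = np.count_nonzero(demand > capacity)
    return failures / n_trials

print(f"estimated failure probability: {failure_probability():.3f}")
```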
Definition of performance specifications for automated Analytical Electrophoresis Facility (AAEF)
NASA Technical Reports Server (NTRS)
Brooks, D. E.
1976-01-01
In order to provide specifications for the automated Analytical Electrophoresis Facility (AAEF) that would satisfy the broadest variety of demands of a future user community, a survey was carried out of all those people who were identified as having published papers on cell electrophoresis in the past four years. A computer search of the relevant literature was conducted, from which a list of 87 investigators was derived and defined as the user community for purposes of the mailing. A questionnaire covering the areas of performance that required definition was developed and subsequently circulated to the user community. Based on the responses to this survey, performance specifications were assembled.
Exploiting the Potential of Data Centers in the Smart Grid
NASA Astrophysics Data System (ADS)
Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei
As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they represent a large and noticeable electric load. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research on letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, providing directions for researchers to further explore this field.
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
Senese, Francesca; Tubertini, Paolo; Mazzocchetti, Angelina; Lodi, Andrea; Ruozi, Corrado; Grilli, Roberto
2015-01-30
Italian regional health authorities annually negotiate the number of residency grants to be financed by the National government and the number and mix of supplementary grants to be funded by the regional budget. This study provides regional decision-makers with a requirement model to forecast the future demand of specialists at the regional level. We have developed a system dynamics (SD) model that projects the evolution of the supply of medical specialists and three demand scenarios across the planning horizon (2030). Demand scenarios account for different drivers: demography, service utilization rates (ambulatory care and hospital discharges) and hospital beds. Based on the SD outputs (occupational and training gaps), a mixed integer programming (MIP) model computes potentially effective assignments of medical specialization grants for each year of the projection. To simulate the allocation of grants, we have compared how regional and national grants can be managed in order to reduce future gaps with respect to current training patterns. The allocation of 25 supplementary grants per year does not appear as effective in reducing expected occupational gaps as the re-modulation of all regional training vacancies.
Neural Mechanisms for Adaptive Learned Avoidance of Mental Effort.
Mitsuto Nagase, Asako; Onoda, Keiichi; Clifford Foo, Jerome; Haji, Tomoki; Akaishi, Rei; Yamaguchi, Shuhei; Sakai, Katsuyuki; Morita, Kenji
2018-02-05
Humans tend to avoid mental effort. Previous studies have demonstrated this tendency using various demand-selection tasks; participants generally avoid options associated with higher cognitive demand. However, it remains unclear whether humans avoid mental effort adaptively in uncertain and non-stationary environments, and if so, what neural mechanisms underlie this learned avoidance and whether they remain the same irrespective of cognitive-demand types. We addressed these issues by developing novel demand-selection tasks where associations between choice options and cognitive-demand levels change over time, with two variations using mental arithmetic and spatial reasoning problems (29:4 and 18:2 males:females). Most participants showed avoidance, and their choices depended on the demand experienced on multiple preceding trials. We assumed that participants updated the expected cost of mental effort through experience, and fitted their choices by reinforcement learning models, comparing several possibilities. Model-based fMRI analyses revealed that activity in the dorsomedial and lateral frontal cortices was positively correlated with the trial-by-trial expected cost for the chosen option commonly across the different types of cognitive demand, and also revealed a trend of negative correlation in the ventromedial prefrontal cortex. We further identified correlates of cost-prediction-error at time of problem-presentation or answering the problem, the latter of which partially overlapped with or were proximal to the correlates of expected cost at time of choice-cue in the dorsomedial frontal cortex. These results suggest that humans adaptively learn to avoid mental effort, having neural mechanisms to represent expected cost and cost-prediction-error, and the same mechanisms operate for various types of cognitive demand. SIGNIFICANCE STATEMENT In daily life, humans encounter various cognitive demands, and tend to avoid high-demand options. However, it remains unclear whether humans avoid mental effort adaptively under dynamically changing environments, and if so, what are the underlying neural mechanisms and whether they operate irrespective of cognitive-demand types. To address these issues, we developed novel tasks, where participants could learn to avoid high-demand options under uncertain and non-stationary environments. Through model-based fMRI analyses, we found regions whose activity was correlated with the expected mental effort cost, or cost-prediction-error, regardless of demand-type, with overlap or adjacence in the dorsomedial frontal cortex. This finding contributes to clarifying the mechanisms for cognitive-demand avoidance, and provides empirical building blocks for the emerging computational theory of mental effort. Copyright © 2018 the authors.
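The following toy Python sketch illustrates the class of model described above, in which an expected effort cost is updated by a cost prediction error and choices soften toward the lower-cost option; the learning rule, parameters, and task structure here are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def simulate_demand_avoidance(costs, alpha=0.3, beta=5.0, n_trials=200, seed=0):
    """Toy delta-rule learner choosing between two options whose demand levels
    can change over time, as in the non-stationary demand-selection task.

    costs: function trial -> (cost_option0, cost_option1).
    """
    rng = np.random.default_rng(seed)
    expected_cost = np.zeros(2)          # learned expected effort cost per option
    choices = []
    for t in range(n_trials):
        # Softmax preference for the option with the lower expected cost.
        p_0 = 1.0 / (1.0 + np.exp(-beta * (expected_cost[1] - expected_cost[0])))
        choice = 0 if rng.random() < p_0 else 1
        experienced = costs(t)[choice]
        # Cost prediction error drives the update of the chosen option only.
        prediction_error = experienced - expected_cost[choice]
        expected_cost[choice] += alpha * prediction_error
        choices.append(choice)
    return np.array(choices), expected_cost

# Example: option 1 is high-demand for the first half of trials, then the mapping swaps.
costs = lambda t: (0.2, 0.8) if t < 100 else (0.8, 0.2)
choices, final_costs = simulate_demand_avoidance(costs)
print(final_costs, choices[-20:].mean())
```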
7 CFR 982.40 - Marketing policy and volume regulation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... percentages. (b) Inshell trade demand. If the Board determines that volume regulation would tend to effectuate the declared policy of the act, it shall compute and announce an inshell trade demand for that year prior to September 20. The inshell trade demand shall equal the average of the preceding three years...
Expanding HPC and Research Computing--The Sustainable Way
ERIC Educational Resources Information Center
Grush, Mary
2009-01-01
Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…
Computers and the Future of Skill Demand. Educational Research and Innovation Series
ERIC Educational Resources Information Center
Elliott, Stuart W.
2017-01-01
Computer scientists are working on reproducing all human skills using artificial intelligence, machine learning and robotics. Unsurprisingly then, many people worry that these advances will dramatically change work skills in the years ahead and perhaps leave many workers unemployable. This report develops a new approach to understanding these…
The AMTEX Partnership{trademark}. Fourth quarter FY95 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-09-01
The AMTEX Partnership{trademark} is a collaborative research and development program among the US Integrated Textile Industry, the Department of Energy (DOE), the national laboratories, other federal agencies and laboratories, and universities. The goal of AMTEX is to strengthen the competitiveness of this vital industry, thereby preserving and creating US jobs. The operations and program management of the AMTEX Partnership{trademark} is provided by the Program Office. This report is produced by the Program Office on a quarterly basis and provides information on the progress, operations, and project management of the partnership. Progress is reported on the following projects: computer-aided fabric evaluation; cotton biotechnology; demand activated manufacturing architecture; electronic embedded fingerprints; on-line process control for flexible fiber manufacturing; rapid cutting; sensors for agile manufacturing; and textile resource conservation.
Marginal Bidding: An Application of the Equimarginal Principle to Bidding in TAC SCM
NASA Astrophysics Data System (ADS)
Greenwald, Amy; Naroditskiy, Victor; Odean, Tyler; Ramirez, Mauricio; Sodomka, Eric; Zimmerman, Joe; Cutler, Clark
We present a fast and effective bidding strategy for the Trading Agent Competition in Supply Chain Management (TAC SCM). In TAC SCM, manufacturers compete to procure computer parts from suppliers (the procurement problem), and then sell assembled computers to customers in reverse auctions (the bidding problem). This paper is concerned only with bidding, in which an agent must decide how many computers to sell and at what prices to sell them. We propose a greedy solution, Marginal Bidding, inspired by the Equimarginal Principle, which states that revenue is maximized among possible uses of a resource when the return on the last unit of the resource is the same across all areas of use. We show experimentally that certain variations of Marginal Bidding can compute bids faster than our ILP solution, which enables Marginal Bidders to consider future demand as well as current demand, and hence achieve greater revenues when knowledge of the future is valuable.
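A minimal sketch of the equimarginal idea behind Marginal Bidding, allocating each unit of capacity to the use with the highest remaining marginal return, is shown below; the demand values are hypothetical and this is not the TAC SCM agent's actual bidding code.

```python
import heapq

def marginal_allocation(capacity, marginal_returns):
    """Greedy allocation in the spirit of the Equimarginal Principle.

    marginal_returns: dict mapping each area of use (e.g. a customer segment)
    to its per-unit returns in decreasing order. Each unit of capacity goes to
    whichever area currently offers the highest marginal return, so the last
    units allocated have (nearly) equal returns across areas. Illustrative only.
    """
    heap = []
    for area, returns in marginal_returns.items():
        if returns:
            # Store the negated return to simulate a max-heap, plus the next-unit index.
            heapq.heappush(heap, (-returns[0], area, 0))

    allocation = {area: 0 for area in marginal_returns}
    revenue = 0.0
    for _ in range(capacity):
        if not heap:
            break
        neg_ret, area, idx = heapq.heappop(heap)
        revenue += -neg_ret
        allocation[area] += 1
        nxt = idx + 1
        if nxt < len(marginal_returns[area]):
            heapq.heappush(heap, (-marginal_returns[area][nxt], area, nxt))
    return allocation, revenue

# Hypothetical marginal revenues for three computer types, in decreasing order.
demand = {"basic": [900, 850, 700, 500], "mid": [1200, 900, 600], "highend": [1500, 800]}
print(marginal_allocation(5, demand))
```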
Belger, A; Banich, M T
1998-07-01
Because interaction of the cerebral hemispheres has been found to aid task performance under demanding conditions, the present study examined how this effect is moderated by computational complexity, the degree of lateralization for a task, and individual differences in asymmetric hemispheric activation (AHA). Computational complexity was manipulated across tasks either by increasing the number of inputs to be processed or by increasing the number of steps to a decision. Comparison of within- and across-hemisphere trials indicated that the size of the between-hemisphere advantage increased as a function of task complexity, except for a highly lateralized rhyme decision task that can only be performed by the left hemisphere. Measures of individual differences in AHA revealed that when task demands and an individual's AHA both load on the same hemisphere, the ability to divide the processing between the hemispheres is limited. Thus, interhemispheric division of processing improves performance at higher levels of computational complexity only when the required operations can be divided between the hemispheres.
Cancer-related Concerns of Spouses of Women with Breast Cancer
Fletcher, Kristin A.; Lewis, Frances Marcus; Haberman, Mel R.
2009-01-01
Objective To describe spouses' reported cancer-related demands attributed to their wife's breast cancer and to test the construct and predictive validity of a brief standardized measure of these demands. Methods Cross-sectional and longitudinal data were obtained from 151 spouses of women newly diagnosed with non-metastatic breast cancer. Descriptive statistics were computed to describe spouses' dominant cancer-related demands and multivariate regression analyses tested the construct and predictive validity of the standardized measure. Results Five categories of spouses' cancer-related demands were identified, such as concerns about: spouses' own functioning; wife's well being and response to treatment; couples' sexual activities; the family's and children's well-being; and the spouses' role in supporting their wives. A 33-item short version of the standardized measure of cancer demands demonstrated construct and predictive validity that was comparable to a 123-item version of the same questionnaire. Greater numbers of illness demands occurred when spouses were more depressed and had less confidence in their ability to manage the impact of the cancer (F=18.08 (3, 103), p<.001). Predictive validity was established by the short form's ability to significantly predict the quality of marital communication and spouses' self-efficacy at a two-month interval. Conclusion The short-version of the standardized measure of cancer-related demands shows promise for future application in clinic settings. Additional testing of the questionnaire is warranted. Spouses' breast cancer-related demands deserve attention by providers. In the absence of assisting them, spouses' illness pressures have deleterious consequences for the quality of marital communication and spouses' self-confidence. PMID:20014184
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
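The sketch below illustrates, under simplifying assumptions, a sampling-based propagation of an evidence-theory input specification: each focal element (a box of input values with a basic probability assignment) is sampled to bound the model response, and belief and plausibility of an output event are accumulated from the BPAs. The model, focal elements, and numbers are placeholders, not those of the cited analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Placeholder model; in practice this would be the computationally
    # intensive simulation whose prediction uncertainty is being assessed.
    return x[..., 0] ** 2 + 3.0 * x[..., 1]

# Epistemic inputs described by focal elements (here 2-D boxes) with basic
# probability assignments (BPAs). The numbers are illustrative only.
focal_elements = [
    {"box": [(0.0, 1.0), (0.0, 2.0)], "bpa": 0.5},
    {"box": [(0.5, 2.0), (1.0, 3.0)], "bpa": 0.3},
    {"box": [(1.0, 3.0), (2.0, 4.0)], "bpa": 0.2},
]

def belief_plausibility(threshold, n_samples=2000):
    """Sampling-based bounds on the evidence for {model output > threshold}.

    For each focal element the model's range over the box is approximated by
    sampling; belief accumulates BPA from boxes whose sampled minimum exceeds
    the threshold, plausibility from boxes whose sampled maximum does.
    """
    bel = pl = 0.0
    for fe in focal_elements:
        lows = np.array([lo for lo, _ in fe["box"]])
        highs = np.array([hi for _, hi in fe["box"]])
        samples = rng.uniform(lows, highs, size=(n_samples, len(lows)))
        y = model(samples)
        if y.min() > threshold:
            bel += fe["bpa"]
        if y.max() > threshold:
            pl += fe["bpa"]
    return bel, pl

print(belief_plausibility(threshold=5.0))
```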
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the aerial recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
The Development of an Educational Cloud for IS Curriculum through a Student-Run Data Center
ERIC Educational Resources Information Center
Hwang, Drew; Pike, Ron; Manson, Dan
2016-01-01
The industry-wide emphasis on cloud computing has created a new focus in Information Systems (IS) education. As the demand for graduates with adequate knowledge and skills in cloud computing is on the rise, IS educators are facing a challenge to integrate cloud technology into their curricula. Although public cloud tools and services are available…
Automated protocols for spaceborne sub-meter resolution "Big Data" products for Earth Science
NASA Astrophysics Data System (ADS)
Neigh, C. S. R.; Carroll, M.; Montesano, P.; Slayback, D. A.; Wooten, M.; Lyapustin, A.; Shean, D. E.; Alexandrov, O.; Macander, M. J.; Tucker, C. J.
2017-12-01
The volume of available remotely sensed data has grown to exceed petabytes per year, and the costs of data, storage systems and compute power have all dropped exponentially. This has opened the door for "Big Data" processing systems with high-end computing (HEC) such as the Google Earth Engine, NASA Earth Exchange (NEX), and NASA Center for Climate Simulation (NCCS). At the same time, commercial very high-resolution (VHR) satellites have grown into a constellation with global repeat coverage that can support existing NASA Earth observing missions with stereo and super-spectral capabilities. Through agreements with the National Geospatial-Intelligence Agency, NASA Goddard Space Flight Center is acquiring petabytes of global sub-meter to 4-meter resolution imagery from the WorldView-1/2/3, QuickBird-2, GeoEye-1 and IKONOS-2 satellites. These data are a valuable, no-direct-cost resource for the enhancement of Earth observation research that supports US government interests. We are currently developing automated protocols for generating VHR products to support NASA's Earth observing missions. These include two primary foci: 1) on demand VHR 1/2° ortho mosaics - process VHR to surface reflectance, orthorectify and co-register multi-temporal 2 m multispectral imagery compiled as user defined regional mosaics. This will provide an easy-access dataset to investigate biodiversity, tree canopy closure, surface water fraction, and cropped area for smallholder agriculture; and 2) on demand VHR digital elevation models (DEMs) - process stereo VHR to extract VHR DEMs with the NASA Ames stereo pipeline. This will benefit Earth surface studies on the cryosphere (glacier mass balance, flow rates and snow depth), hydrology (lake/water body levels, landslides, subsidence) and biosphere (forest structure, canopy height/cover) among others. Recent examples of products used in NASA Earth Science projects will be provided. This HEC API could help surmount prior spatial-temporal limitations while providing broad benefits to Earth Science.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
Tools and Techniques for Basin-Scale Climate Change Assessment
NASA Astrophysics Data System (ADS)
Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.
2012-12-01
The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies to explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov chain techniques. Resampling can also be conditioned on climate change projections, e.g., downscaled GCM projections, to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. Resulting data is imported directly into the decision model. Different model files can represent infrastructure alternatives, and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executed changes in operations or other options. The over-arching Study Manager provides a graphical tool to create combinations of future supply scenarios, demand scenarios, infrastructure and operating policy alternatives; each scenario is executed as an ensemble of RiverWare runs, driven by the hydrologic supply. The Study Manager sets up and manages multiple executions on multi-core hardware. The sizeable outputs are typically direct model results or post-processed indicators of performance based on model outputs. Post-processing statistical analysis of the outputs is possible using the Graphical Policy Analysis Tool or other statistical packages. Several Basin Studies undertaken have used RiverWare to evaluate future scenarios. The Colorado River Basin Study, the most complex and extensive to date, has taken advantage of these tools and techniques to generate supply scenarios, produce alternative demand scenarios, and set up and execute the many combinations of supplies, demands, policies, and infrastructure alternatives. The tools and techniques will be described with example applications.
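For the K-nearest-neighbor resampling step mentioned above, the following single-site Python sketch shows the basic mechanics of conditionally resampling successor years with 1/rank weights; the Hydrology Simulator itself handles multiple sites, paleo conditioning, and disaggregation, none of which is attempted here.

```python
import numpy as np

def knn_trace(flows, length, k=5, seed=0):
    """Generate one synthetic hydrologic trace by K-nearest-neighbor
    resampling of historical annual flows (a simplified, single-site sketch
    of the conditional resampling described above).

    flows : 1-D array of historical annual flows.
    """
    rng = np.random.default_rng(seed)
    flows = np.asarray(flows, dtype=float)
    n = len(flows)
    # Standard decreasing kernel: weight of the j-th nearest neighbor ~ 1/j.
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()

    trace = [flows[rng.integers(n)]]            # random starting year
    for _ in range(length - 1):
        current = trace[-1]
        # Distances from the current value to all historical years with a successor.
        dist = np.abs(flows[:-1] - current)
        neighbors = np.argsort(dist)[:k]        # indices of the k nearest years
        pick = rng.choice(neighbors, p=weights) # sample a neighbor by 1/rank weight
        trace.append(flows[pick + 1])           # append that year's successor flow
    return np.array(trace)

# Example with a synthetic 50-year record.
hist = np.random.default_rng(42).gamma(shape=4.0, scale=500.0, size=50)
print(knn_trace(hist, length=20)[:5])
```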
Optimum spaceborne computer system design by simulation
NASA Technical Reports Server (NTRS)
Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.
1973-01-01
A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.
Are Cloud Environments Ready for Scientific Applications?
NASA Astrophysics Data System (ADS)
Mehrotra, P.; Shackleford, K.
2011-12-01
Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study focusing on the efficacy of using clouds for NASA's scientific applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herter, Karen; Rasin, Josh; Perry, Tim
2009-11-30
The goal of this study was to demonstrate a demand response system that can signal nearly every customer in all sectors through the integration of two widely available and non-proprietary communications technologies--Open Automated Demand Response (OpenADR) over Internet protocol and Utility Messaging Channel (UMC) over FM radio. The outcomes of this project were as follows: (1) a software bridge to allow translation of pricing signals from OpenADR to UMC; and (2) a portable demonstration unit with an Internet-connected notebook computer, a portfolio of DR-enabling technologies, and a model home. The demonstration unit provides visitors the opportunity to send electricity-pricing information over the Internet (through OpenADR and UMC) and then watch as the model appliances and lighting respond to the signals. The integration of OpenADR and UMC completed and demonstrated in this study enables utilities to send hourly or sub-hourly electricity pricing information simultaneously to the residential, commercial and industrial sectors.
Fluid dynamic modeling of nano-thermite reactions
NASA Astrophysics Data System (ADS)
Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; Yuki Horie, Yasuyuki
2014-03-01
This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method provides for modeling reflections of the shock waves from the reactor walls that generates pressure-time fluctuations. The results of gas pressure prediction are consistent with the experimental evidence and estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding and flexible than self-similar solution, in particular it allows study of a shock wave in its early stage of reaction and allows the investigation of "slower" reactions, which may produce weaker shock waves. Moreover, numerical results indicate that peak pressure is not very sensitive to initial density and reaction time, providing that all the material reacts well before the shock wave arrives at the end of the reactor.
Effects of Regulation and Technology on End Uses of Nonfuel Mineral Commodities in the United States
Matos, Grecia R.
2007-01-01
The regulatory system and advancement of technologies have shaped the end-use patterns of nonfuel minerals used in the United States. These factors affected the quantities and types of materials used by society. Environmental concerns and awareness of possible negative effects on public health prompted numerous regulations that have dramatically altered the use of commodities like arsenic, asbestos, lead, and mercury. While the selected commodities represent only a small portion of overall U.S. materials use, they have the potential for harmful effects on human health or the environment, which other commodities, like construction aggregates, do not normally have. The advancement of technology allowed for new uses of mineral materials in products like high-performance computers, telecommunications equipment, plasma and liquid-crystal display televisions and computer monitors, mobile telephones, and electronic devices, which have become mainstream products. These technologies altered the end-use pattern of mineral commodities like gallium, germanium, indium, and strontium. Human ingenuity and people's demand for different and creative services increase the demand for new materials and industries while shifting the pattern of use of mineral commodities. The mineral commodities' end-use data are critical for the understanding of the magnitude and character of these flows, assessing their impact on the environment, and providing an early warning of potential problems in waste management of products containing these commodities. The knowledge of final disposition of the mineral commodity allows better decisions as to how regulation should be tailored.
Computation of elementary modes: a unifying framework and the new binary approach
Gagneur, Julien; Klamt, Steffen
2004-01-01
Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a proliferation of algorithms dedicated to it; a summarizing point of view and continued improvement of the current methods are required. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
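The post-processing step mentioned above can be pictured with a small numerical sketch: once the binary pattern (support) of an elementary mode is known, the stoichiometric coefficients span the one-dimensional null space of the stoichiometric matrix restricted to the participating reactions. The Python code below is illustrative only and is not the FluxAnalyzer implementation.

```python
import numpy as np

def coefficients_from_support(N, support):
    """Recover the flux coefficients of an elementary mode from its binary
    pattern of participating reactions.

    N       : stoichiometric matrix (metabolites x reactions).
    support : boolean array marking the participating reactions.

    For a genuine elementary mode the restricted matrix N[:, support] has a
    one-dimensional null space; a basis vector of it (suitably scaled) gives
    the coefficients.
    """
    sub = N[:, support]
    # Null space via SVD: right singular vectors of (near-)zero singular values.
    _, s, vt = np.linalg.svd(sub)
    tol = max(sub.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
    null_dim = sub.shape[1] - int(np.count_nonzero(s > tol))
    if null_dim != 1:
        raise ValueError("support does not define an elementary mode")
    v = vt[-1]
    v = v * np.sign(v[np.argmax(np.abs(v))])  # fix the sign
    v = v / np.max(np.abs(v))                 # scale the largest entry to 1
    mode = np.zeros(N.shape[1])
    mode[support] = v
    return mode

# Toy network: R1: -> A,  R2: A -> B,  R3: B -> C,  R4: C ->
N = np.array([[ 1, -1,  0,  0],   # A
              [ 0,  1, -1,  0],   # B
              [ 0,  0,  1, -1]])  # C
support = np.array([True, True, True, True])
print(coefficients_from_support(N, support))   # -> [1. 1. 1. 1.]
```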
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, and thus approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application quite time-demanding: for example, when heavy-tailed distributions are used, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required to choose efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation and propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in an exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicit computable form. Two examples are provided in order to illustrate the theory.
Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry
NASA Astrophysics Data System (ADS)
Wang, Daodang; Xu, Ping; Gong, Zhidong; Xie, Zhongmin; Liang, Rongguang; Xu, Xinke; Kong, Ming; Zhao, Jun
2018-06-01
Transmitted wavefront testing is required for the performance evaluation of transmission optics and transparent glass, and the achievable dynamic range is a key issue. A computer-aided deflectometric testing method with fringe projection is proposed for the accurate testing of transmitted wavefronts with a large dynamic range. Ray tracing of the modeled testing system is carried out to achieve the virtual 'null' testing of transmitted wavefront aberrations. The ray aberration is obtained from the ray tracing result and the measured slope, with which the test wavefront aberration can be reconstructed. To eliminate testing system modeling errors, a system geometry calibration based on computer-aided reverse optimization is applied to realize accurate testing. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and high accuracy of the proposed testing method. The proposed testing method can achieve a large dynamic range compared with the interferometric method, providing a simple, low-cost and accurate way to test transmitted wavefronts from various kinds of optics and a large number of industrial transmission elements.
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.
2017-10-01
One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e. the batch computing farm) while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (based on the Calico Project). Finally, we have studied the possibility of deploying an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.
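As a rough sketch of the first approach, a pre-configured HTCondor daemon image can be registered with Marathon as a long-running app through its REST API (POST /v2/apps); the image name, endpoint, resource figures and health-check command below are hypothetical placeholders, not the INDIGO-DataCloud configuration.

```python
import json
import urllib.request

# Hypothetical app definition for an HTCondor central-manager container run as
# a Marathon Long Running Service; the image, resources and health check are
# placeholders and assume the image ships the standard HTCondor CLI tools.
app = {
    "id": "/htcondor/central-manager",
    "instances": 1,
    "cpus": 1.0,
    "mem": 2048,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/htcondor-central-manager:latest"},
    },
    "healthChecks": [
        {
            "protocol": "COMMAND",
            "command": {"value": "condor_status -total"},  # assumes condor tools in the image
            "gracePeriodSeconds": 120,
            "intervalSeconds": 30,
            "maxConsecutiveFailures": 3,
        }
    ],
}

req = urllib.request.Request(
    url="http://marathon.example.org:8080/v2/apps",   # placeholder Marathon endpoint
    data=json.dumps(app).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:             # Marathon schedules and supervises the container
    print(resp.status, resp.read()[:200])
```

If the container dies or its health check fails repeatedly, Marathon restarts it elsewhere in the Mesos cluster, which is the failover behavior the abstract refers to.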
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power and platform-independent programming languages such as Java provide new means to create greater depth of analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.
Lesson on Demand. Lesson Plan.
ERIC Educational Resources Information Center
Weaver, Sue
This lesson plan helps students understand the role consumer demand plays in the market system, i.e., how interactions in the marketplace help determine pricing. Students will participate in an activity that demonstrates the concepts of demand, demand schedule, demand curve, and the law of demand. The lesson plan provides student objectives;…
Apollo LM guidance computer software for the final lunar descent.
NASA Technical Reports Server (NTRS)
Eyles, D.
1973-01-01
In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
Strategic directions of computing at Fermilab
NASA Astrophysics Data System (ADS)
Wolbers, Stephen
1998-05-01
Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data, and systems must be in place to allow timely analysis of the data. Collider Run II, beginning in 1999, is projected to produce on the order of 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and a reduction in operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems that cannot be purchased or acquired elsewhere. A closer working relationship with other high-energy physics laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.
Teuchmann, K; Totterdell, P; Parker, S K
1999-01-01
Experience sampling methodology was used to examine how work demands translate into acute changes in affective response and thence into chronic response. Seven accountants reported their reactions 3 times a day for 4 weeks on pocket computers. Aggregated analysis showed that mood and emotional exhaustion fluctuated in parallel with time pressure over time. Disaggregated time-series analysis confirmed the direct impact of high-demand periods on the perception of control, time pressure, and mood and the indirect impact on emotional exhaustion. A curvilinear relationship between time pressure and emotional exhaustion was shown. The relationships between work demands and emotional exhaustion changed between high-demand periods and normal working periods. The results suggest that enhancing perceived control may alleviate the negative effects of time pressure.
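The curvilinear relationship reported between time pressure and emotional exhaustion is the kind of effect typically tested by adding a squared predictor to a regression model; the sketch below illustrates that test on synthetic data only, not the study's diary data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for repeated diary ratings: exhaustion rises with time
# pressure but accelerates at the high end (a quadratic component).
time_pressure = rng.uniform(0, 10, 200)
exhaustion = (1.0 + 0.2 * time_pressure + 0.05 * time_pressure**2
              + rng.normal(0, 0.5, 200))

# Fit exhaustion = b0 + b1*x + b2*x^2 by ordinary least squares; a non-zero
# b2 is the signature of a curvilinear (here convex) relationship.
X = np.column_stack([np.ones_like(time_pressure), time_pressure, time_pressure**2])
b, *_ = np.linalg.lstsq(X, exhaustion, rcond=None)
print(dict(zip(["b0", "b1", "b2"], b.round(3))))
```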
A Virtual Science Data Environment for Carbon Dioxide Observations
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Law, E.; Crichton, D. J.; Mattmann, C. A.; Gunson, M. R.; Braverman, A. J.; Nguyen, H. M.; Eldering, A.; Castano, R.; Osterman, G. B.
2011-12-01
Climate science data are often distributed cross-institutionally and made available using heterogeneous interfaces. With respect to observational carbon-dioxide (CO2) records, these data span across national as well as international institutions and are typically distributed using a variety of data standards. Such an arrangement can yield challenges from a research perspective, as users often need to independently aggregate datasets as well as address the issue of data quality. To tackle this dispersion and heterogeneity of data, we have developed the CO2 Virtual Science Data Environment - a comprehensive approach to virtually integrating CO2 data and metadata from multiple missions and providing a suite of computational services that facilitate analysis, comparison, and transformation of that data. The Virtual Science Environment provides climate scientists with a unified web-based destination for discovering relevant observational data in context, and supports a growing range of online tools and services for analyzing and transforming the available data to suit individual research needs. It includes web-based tools to geographically and interactively search for CO2 observations collected from multiple airborne, space, as well as terrestrial platforms. Moreover, the data analysis services it provides over the Internet, including offering techniques such as bias estimation and spatial re-gridding, move computation closer to the data and reduce the complexity of performing these operations repeatedly and at scale. The key to enabling these services, as well as consolidating the disparate data into a unified resource, has been to focus on leveraging metadata descriptors as the foundation of our data environment. This metadata-centric architecture, which leverages the Dublin Core standard, forgoes the need to replicate remote datasets locally. Instead, the system relies upon an extensive, metadata-rich virtual data catalog allowing on-demand browsing and retrieval of CO2 records from multiple missions. In other words, key metadata information about remote CO2 records is stored locally while the data itself is preserved at its respective archive of origin. This strategy has been made possible by our method of encapsulating the heterogeneous sources of data using a common set of web-based services, including services provided by Jet Propulsion Laboratory's Climate Data Exchange (CDX). Furthermore, this strategy has enabled us to scale across missions, and to provide access to a broad array of CO2 observational data. Coupled with on-demand computational services and an intuitive web-portal interface, the CO2 Virtual Science Data Environment effectively transforms heterogeneous CO2 records from multiple sources into a unified resource for scientific discovery.
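As a schematic illustration of the metadata-centric design described above (the class name, field values, and URL here are invented for illustration and are not the system's actual API), a catalog record might hold only Dublin Core descriptors locally and resolve the remote granule on demand:

```python
from dataclasses import dataclass
from typing import Dict
import urllib.request

@dataclass
class CatalogRecord:
    """A metadata-only entry: Dublin Core descriptors are held locally,
    while the CO2 granule itself stays at its archive of origin."""
    dublin_core: Dict[str, str]

    def retrieve(self) -> bytes:
        # On-demand retrieval: fetch the remote granule only when requested.
        with urllib.request.urlopen(self.dublin_core["identifier"]) as resp:
            return resp.read()

# Illustrative record; the identifier URL and values are placeholders,
# not real mission endpoints.
record = CatalogRecord(dublin_core={
    "title":      "XCO2 retrievals, 2011-07 (example granule)",
    "creator":    "Hypothetical CO2 sounder team",
    "subject":    "carbon dioxide; column-averaged dry-air mole fraction",
    "date":       "2011-07-15",
    "format":     "application/x-netcdf",
    "coverage":   "lat [-60, 60], lon [-180, 180]",
    "identifier": "https://archive.example.org/co2/granules/2011-07-15.nc",
})
print(record.dublin_core["title"])
```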
Using the PhysX engine for physics-based virtual surgery with force feedback.
Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu
2009-09-01
The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interaction requires graphical updates at 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently several physics engines have been developed which offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their higher latencies, compared to what is necessary for real-time graphics and haptics, pose significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library. Hence, models that undergo relatively low-frequency updates in PhysX can exist in an environment that demands much higher-frequency updates for haptics. We use a collision handling layer to interface between the physical response provided by PhysX and the haptic rendering device, providing both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphics realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. Developing complex surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction rates such trainers require. We have shown that combining different modelling strategies for behaviour, collision and graphics is possible and desirable. Such multimodal environments enable suitable rates to simulate the major steps of the LAGB procedure.
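The collision handling layer that bridges the low-frequency physics update and the 1 kHz haptic loop is described only at a high level in the abstract; the sketch below illustrates the general idea with a virtual-coupling spring-damper rendered at every haptic step against the most recently delivered proxy state. The names, rates, and stiffness values are illustrative assumptions, not the PhysX API or the authors' implementation.

```python
import numpy as np

K_COUPLING = 400.0    # N/m   spring between device tip and physics proxy
B_COUPLING = 2.0      # N*s/m damping
HAPTIC_DT  = 1e-3     # 1 kHz haptic loop
PHYSICS_DT = 1 / 60   # ~60 Hz physics update

proxy_pos = np.zeros(3)      # last state delivered by the physics engine
time_since_physics = 0.0

def physics_step(device_pos):
    """Stand-in for a physics-engine update: move the proxy toward the device
    but keep it outside a flat tissue surface at z=0 (crude collision handling)."""
    p = device_pos.copy()
    p[2] = max(p[2], 0.0)
    return p

def haptic_force(device_pos, device_vel):
    """Virtual-coupling force rendered every 1 ms against the latest proxy."""
    return K_COUPLING * (proxy_pos - device_pos) - B_COUPLING * device_vel

# Simulate 0.1 s of a device tip pushed slowly into the surface.
device_pos = np.array([0.0, 0.0, 0.01])
device_vel = np.array([0.0, 0.0, -0.02])
for step in range(100):
    device_pos = device_pos + device_vel * HAPTIC_DT
    time_since_physics += HAPTIC_DT
    if time_since_physics >= PHYSICS_DT:       # slow loop: refresh the proxy
        proxy_pos = physics_step(device_pos)
        time_since_physics = 0.0
    f = haptic_force(device_pos, device_vel)   # fast loop: force at 1 kHz
print(f)
```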
ERIC Educational Resources Information Center
Miller, John; Weil, Gordon
1986-01-01
The interactive feature of computers is used to incorporate a guided inquiry method of learning introductory economics, extending the Computer Assisted Instruction (CAI) method beyond drills. (Author/JDH)
Customer premises services market demand assessment 1980 - 2000. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Gamble, R. B.; Saporta, L.; Heidenrich, G. A.
1983-01-01
Estimates of market demand for domestic civilian telecommunications services for the years 1980 to 2000 are provided. Overall demand, demand for satellite services, demand for satellite-delivered Customer Premises Service (CPS), and demand for 30/20 GHz Customer Premises Services are covered. Emphasis is placed on the CPS market, and demand is segmented by market, by service, by user class, and by geographic region. Prices for competing services are discussed, and the distribution of traffic with respect to distance is estimated. A nationwide traffic distribution model for CPS is provided, giving demand for CPS traffic and earth stations for each of the major SMSAs in the United States.
Dynamics of electricity market correlations
NASA Astrophysics Data System (ADS)
Alvarez-Ramirez, J.; Escarela-Perez, R.; Espinosa-Perez, G.; Urrea, R.
2009-06-01
Electricity market participants rely on demand and price forecasts to decide their bidding strategies, allocate assets, negotiate bilateral contracts, hedge risks, and plan facility investments. However, forecasting is hampered by the non-linear and stochastic nature of price time series. Diverse modeling strategies, from neural networks to traditional transfer functions, have been explored. These approaches are based on the assumption that price series contain correlations that can be exploited for model-based prediction purposes. While many works have been devoted to the demand and price modeling, a limited number of reports on the nature and dynamics of electricity market correlations are available. This paper uses detrended fluctuation analysis to study correlations in the demand and price time series and takes the Australian market as a case study. The results show the existence of correlations in both demand and prices over three orders of magnitude in time ranging from hours to months. However, the Hurst exponent is not constant over time, and its time evolution was computed over a subsample moving window of 250 observations. The computations, also made for two Canadian markets, show that the correlations present important fluctuations over a seasonal one-year cycle. Interestingly, non-linearities (measured in terms of a multifractality index) and reduced price predictability are found for the June-July periods, while the converse behavior is displayed during the December-January period. In terms of forecasting models, our results suggest that non-linear recursive models should be considered for accurate day-ahead price estimation. On the other hand, linear models seem to suffice for demand forecasting purposes.
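Detrended fluctuation analysis itself is straightforward to implement; a minimal sketch is shown below, applied to synthetic noise rather than the Australian market series. An exponent near 0.5 indicates uncorrelated increments, while values approaching 1 indicate the persistent correlations discussed above.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Detrended fluctuation analysis: slope of log F(n) vs log n, where F(n)
    is the RMS of the linearly detrended integrated series over windows of size n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated (profile) series
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 15).astype(int))
    F = []
    for n in scales:
        n_seg = len(y) // n
        segments = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        sq_res = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)         # remove the local linear trend
            sq_res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq_res)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# White noise should yield an exponent close to 0.5.
print(round(dfa_exponent(np.random.default_rng(2).standard_normal(4000)), 2))
```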
UMAMI: A Recipe for Generating Meaningful Metrics through Holistic I/O Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockwood, Glenn K.; Yoo, Wucherl; Byna, Suren
I/O efficiency is essential to productivity in scientific computing, especially as many scientific domains become more data-intensive. Many characterization tools have been used to elucidate specific aspects of parallel I/O performance, but analyzing components of complex I/O subsystems in isolation fails to provide insight into critical questions: how do the I/O components interact, what are reasonable expectations for application performance, and what are the underlying causes of I/O performance problems? To address these questions while capitalizing on existing component-level characterization tools, we propose an approach that combines on-demand, modular synthesis of I/O characterization data into a unified monitoring and metrics interface (UMAMI) to provide a normalized, holistic view of I/O behavior. We evaluate the feasibility of this approach by applying it to a month-long benchmarking study on two distinct large-scale computing platforms. We present three case studies that highlight the importance of analyzing application I/O performance in context with both contemporaneous and historical component metrics, and we provide new insights into the factors affecting I/O performance. By demonstrating the generality of our approach, we lay the groundwork for a production-grade framework for holistic I/O analysis.
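UMAMI's own interface is not reproduced here, but the core idea of judging one application's I/O measurement against contemporaneous component metrics and its own history can be sketched with invented column names and synthetic values:

```python
import numpy as np
import pandas as pd

# Hypothetical daily records for one application: its achieved I/O bandwidth
# plus contemporaneous component metrics gathered by separate monitoring tools.
rng = np.random.default_rng(3)
days = pd.date_range("2017-02-01", periods=30, freq="D")
metrics = pd.DataFrame({
    "app_gib_per_s":       rng.normal(35, 4, 30),   # e.g. from application-level I/O logs
    "fs_fullness_pct":     rng.normal(60, 5, 30),   # e.g. from file system monitoring
    "server_cpu_load_pct": rng.normal(40, 10, 30),  # e.g. from storage server telemetry
}, index=days)
metrics.loc[days[-1], "app_gib_per_s"] = 18.0       # today's run looks slow

# Holistic view: compare today's values against the trailing history so a slow
# run can be attributed (or not) to contemporaneous subsystem conditions.
history, today = metrics.iloc[:-1], metrics.iloc[-1]
z_scores = (today - history.mean()) / history.std()
print(z_scores.round(2))
```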