Scalable Management of Enterprise and Data-Center Networks
2011-09-01
Minlan Yu. A dissertation presented to the faculty of Princeton University. Excerpt: "To the best of our knowledge, there is no systematic and efficient solution for handling overlapping wildcard rules in network-wide flow management."
Delay-based virtual congestion control in multi-tenant datacenters
NASA Astrophysics Data System (ADS)
Liu, Yuxin; Zhu, Danhong; Zhang, Dong
2018-03-01
With the evolution of cloud computing and virtualization, congestion control in virtual datacenters has become a basic transmission issue for multi-tenant datacenters. To address the friendliness conflict among the heterogeneous congestion control algorithms of multiple tenants, this paper proposes a delay-based virtual congestion control scheme, which uniformly translates the tenants' heterogeneous congestion control into delay-based feedback by introducing a hypervisor translation layer, modifying the three-way handshake for explicit feedback and packet-loss feedback, and throttling the receive window. The simulation results show that the delay-based virtual congestion control can effectively resolve the unfairness among heterogeneous feedback-based congestion control algorithms.
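To make the translation idea concrete, the sketch below shows a hypervisor-side routine that turns a delay measurement into a cap on the advertised receive window, in the spirit of delay-based (Vegas-style) control. The update rule, constants, and function name are illustrative assumptions, not the algorithm from the paper.

```python
# Minimal sketch of a hypervisor-side translation layer that converts
# congestion signals into a delay-based receive-window cap.
# The update rule and constants are illustrative assumptions, not the
# algorithm from the paper.

def throttled_rwnd(base_rtt_ms, sample_rtt_ms, current_rwnd, mss=1460,
                   alpha=2, beta=4):
    """Return a receive-window cap (bytes) from a delay signal.

    Vegas-style heuristic: estimate how many extra segments are queued
    (diff) and shrink or grow the advertised window accordingly.
    """
    expected = current_rwnd / base_rtt_ms      # bytes/ms if no queuing
    actual = current_rwnd / sample_rtt_ms      # observed bytes/ms
    diff_segments = (expected - actual) * base_rtt_ms / mss
    if diff_segments > beta:        # too much queuing delay -> back off
        return max(current_rwnd - mss, mss)
    if diff_segments < alpha:       # little queuing -> allow growth
        return current_rwnd + mss
    return current_rwnd             # within the target band


if __name__ == "__main__":
    rwnd = 64 * 1460
    for rtt in (1.0, 1.2, 1.8, 2.5, 1.1):   # RTT samples in ms
        rwnd = throttled_rwnd(1.0, rtt, rwnd)
        print(f"RTT {rtt:.1f} ms -> advertised window {rwnd} bytes")
```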
Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale
Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason
2017-01-01
With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049
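The "FPGA accelerators as a service" idea, in which applications request a named accelerator through a small API and the runtime multiplexes a shared pool across threads, can be sketched as follows. The class and method names are hypothetical illustrations, not Blaze's actual API.

```python
# Hypothetical sketch of an accelerator-as-a-service client/registry.
# Names (FaaSRegistry, request, release) are illustrative only; they are
# not the real Blaze API.
import threading

class FaaSRegistry:
    """Shares a fixed pool of FPGA accelerator handles among threads."""

    def __init__(self, accelerators):
        self._free = list(accelerators)
        self._lock = threading.Lock()
        self._available = threading.Condition(self._lock)

    def request(self, kernel_name):
        # Block until an accelerator implementing the kernel is free.
        with self._available:
            while not any(a["kernel"] == kernel_name for a in self._free):
                self._available.wait()
            for i, a in enumerate(self._free):
                if a["kernel"] == kernel_name:
                    return self._free.pop(i)

    def release(self, accel):
        with self._available:
            self._free.append(accel)
            self._available.notify_all()


if __name__ == "__main__":
    registry = FaaSRegistry([{"id": 0, "kernel": "logistic_regression"},
                             {"id": 1, "kernel": "kmeans"}])
    accel = registry.request("kmeans")
    print("got accelerator", accel["id"])   # offload work here
    registry.release(accel)
```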
Data Center Energy Practitioner (DCEP) Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Traber, Kim; Salim, Munther; Sartor, Dale A.
2016-02-02
The main objective for the DCEP program is to raise the standards of those involved in energy assessments of data centers to accelerate energy savings. The program is driven by the fact that significant knowledge, training, and skills are required to perform accurate energy assessments. The program will raise the confidence level in energy assessments in data centers. For those who pass the exam, the program will recognize them as Data Center Energy Practitioners (DCEPs) by issuing a certificate. Hardware req.: PC, MAC; Software req.: Windows; Related/auxiliary software: MS Office; Type of files: executable modules, user guide; Documentation: e-user manual, http://www.1.eere.energy.gov/industry/datacenters/ (12/10/15 - new documentation URL: https://datacenters.lbl.gov/dcep)
NASA Astrophysics Data System (ADS)
Deng, Peng; Kavehrad, Mohsen; Lou, Yan
2017-01-01
Flexible wireless datacenter networks based on free-space optical (FSO) communication links are being considered as promising solutions to meet future datacenter demands for high throughput, robustness to dynamic traffic patterns, low cabling complexity, and energy efficiency. Robust and precisely steerable FSO links under dynamic traffic play a key role in a reconfigurable optical wireless inter-rack datacenter network. In this work, we propose and demonstrate a reconfigurable 10 Gbps FSO system incorporating a smart beam acquisition and tracking mechanism based on a gimbal-less two-axis MEMS micro-mirror and an aperture marked with retro-reflective film. The fast MEMS-based beam acquisition switches the laser beam of an FSO terminal from one rack to the next for network reconfiguration, and the precise beam tracking lets the FSO device auto-correct misalignment in real time. We evaluate the optical power loss and bit error rate performance of steerable FSO links in various directions. Experimental results suggest that MEMS-based steerable FSO links hold considerable promise for future reconfigurable wireless datacenter networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bell, Geoffrey C.; Federspiel, Clifford
Control software and wireless sensors designed for closed-loop monitoring and control of IT equipment's inlet air temperatures in datacenters were evaluated and tested while other datacenter cooling best practices were implemented. The control software and hardware, along with each best practice, were installed sequentially and evaluated using a measurement and verification procedure between each measure. The results show that the overall project eliminates 475,239 kWh per year, which is 21.3 percent of the baseline energy consumption of the data center. The total project, including the best practices, will save $42,772 per year and cost $134,057, yielding a simple payback of 3.1 years. However, the control system alone eliminates 59.6 percent of the baseline energy used to move air in the datacenter and 13.6 percent of the baseline cooling energy, which is 15.2 percent of the baseline energy consumption (see Project Approach, Task 1, below, for additional information), while keeping temperatures substantially within the limits recommended by ASHRAE. Savings attributed to the control system are $30,564 per year, with a cost of $56,824, for a simple payback of 1.9 years.
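The quoted paybacks follow directly from the stated costs and annual savings; a quick check reproduces the 3.1-year and 1.9-year figures.

```python
# Reproduce the simple-payback figures quoted above.
projects = {
    "full project (controls + best practices)": (134_057, 42_772),
    "control system alone": (56_824, 30_564),
}
for name, (cost_usd, savings_usd_per_yr) in projects.items():
    payback_years = cost_usd / savings_usd_per_yr
    print(f"{name}: {payback_years:.1f} years")
# -> about 3.1 and 1.9 years, matching the report.
```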
DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments
NASA Astrophysics Data System (ADS)
Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk
The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only to save energy, but also because of the ecological commitment inherent in power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes, and numerically evaluates DEEP-SaM, a mechanism for clearing provisioning markets based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models and the implications of the fixed costs inherent in the energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for the long-term sustainable management of modern datacenters.
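A linear power model of the kind the authors focus on ties server power to utilization with a fixed idle component; the sketch below shows such a model and the energy cost it implies. The coefficients and electricity price are made-up examples, not values from the paper.

```python
# Linear power model sketch: P(u) = P_idle + (P_peak - P_idle) * u,
# with utilization u in [0, 1]. Coefficients are illustrative assumptions.
def power_watts(utilization, p_idle=120.0, p_peak=300.0):
    return p_idle + (p_peak - p_idle) * max(0.0, min(1.0, utilization))

def energy_cost(utilization, hours, price_per_kwh=0.12):
    return power_watts(utilization) * hours / 1000.0 * price_per_kwh

# The fixed idle term is what makes chronic underutilization expensive:
for u in (0.1, 0.5, 0.9):
    print(f"u={u:.1f}: {power_watts(u):.0f} W, "
          f"${energy_cost(u, 24 * 30):.2f} per month")
```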
Szyrkowiec, Thomas; Autenrieth, Achim; Gunning, Paul; Wright, Paul; Lord, Andrew; Elbers, Jörg-Peter; Lumb, Alan
2014-02-10
For the first time, we demonstrate the orchestration of elastic datacenter and inter-datacenter transport network resources using a combination of OpenStack and OpenFlow. Programmatic control allows a datacenter operator to dynamically request optical lightpaths from a transport network operator to accommodate rapid changes of inter-datacenter workflows.
Systems and technologies for high-speed inter-office/datacenter interface
NASA Astrophysics Data System (ADS)
Sone, Y.; Nishizawa, H.; Yamamoto, S.; Fukutoku, M.; Yoshimatsu, T.
2017-01-01
Emerging requirements for inter-office/inter-datacenter short-reach links for data center interconnects (DCI) and metro transport networks have led to various inter-office and inter-datacenter optical interface technologies. These technologies are bringing significant changes to systems and network architectures. In this paper, we present a system and ZR optical interface technologies for DCI and metro transport networks, then introduce the latest challenges facing the system framework. There are two trends in reach extension: one is to use Ethernet and the other is to use digital coherent technologies. The first approach achieves reach extension while using as many existing Ethernet components as possible. It offers low cost because it reuses the cost-effective components created for the large Ethernet market. The second approach adopts low-cost, low-power coherent DSPs that implement a minimal set of long-haul transmission functions. This paper introduces an architecture that integrates both trends. The architecture satisfies both datacom and telecom needs with a common control and management interface and automated configuration.
Virtualized Networks and Virtualized Optical Line Terminal (vOLT)
NASA Astrophysics Data System (ADS)
Ma, Jonathan; Israel, Stephen
2017-03-01
The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined networking (SDN) operating system capable of managing and controlling network, computing, and storage resources to support the CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
Sadot, Dan; Dorman, G; Gorshtein, Albert; Sonkin, Eduard; Vidal, Or
2015-01-26
112Gbit/sec DSP-based single channel transmission of PAM4 at 56Gbaud over 15GHz of effective analog bandwidth is experimentally demonstrated. The DSP enables use of mature 25G optoelectronics for 2-10km datacenter intra-connections, and 8Tbit/sec over 80km interconnections between data centers.
Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques
NASA Astrophysics Data System (ADS)
Bhadauria, Rohit; Sanyal, Sugata
2012-06-01
Cloud computing holds the potential to eliminate the requirements for setting up high-cost computing infrastructure for the IT-based solutions and services that industry uses. It promises to provide a flexible IT architecture, accessible through the internet from lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. In a cloud computing environment, the entire data set resides over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and taken care of. Also, one can never deny the possibility of a server breakdown, which has been witnessed quite often in recent times. There are various issues that need to be dealt with regarding security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting the various stakeholders linked to it.
Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic
Sanduja, S; Jewell, P; Aron, E; Pharai, N
2015-01-01
Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments are available at a fraction of the time and effort when compared to traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic. PMID:26451333
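As a taste of the provisioning step such a tutorial walks through, the sketch below starts a single compute node on AWS with the boto3 SDK. The AMI ID, instance type, and key-pair name are placeholders, not values from the tutorial.

```python
# Sketch: launch one EC2 compute node with boto3 (AWS SDK for Python).
# The AMI ID, key pair, and instance type below are placeholders, not
# values from the tutorial; credentials come from your AWS configuration.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with NONMEM/PsN installed
    InstanceType="c5.4xlarge",         # placeholder compute-optimized type
    KeyName="my-keypair",              # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()
instance.reload()
print("cluster node ready at", instance.public_dns_name)
```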
Application of green IT for physics data processing at INCDTIM
NASA Astrophysics Data System (ADS)
Farcas, Felix; Trusca, Radu; Albert, Stefan; Szabo, Izabella; Popeneciu, Gabriel
2012-02-01
Green IT is the next-generation technology used in datacenters around the world. Its benefits are of economic and financial interest. The new technologies are energy efficient, reduce cost, and avoid potential disruptions to the existing infrastructure. The most critical problem arises in the cooling systems, which are essential to the functioning of a datacenter. Green IT used in a Grid network benefits the environment and is the next phase in computing infrastructure, one that will fundamentally change the way we think about and use computing power. At the National Institute for Research and Development of Isotopic and Molecular Technologies Cluj-Napoca (INCDTIM) we have implemented this kind of technology, and it has helped us process data in different domains, which brought INCDTIM onto the major Grid domain with the RO-14-ITIM Grid site. In this paper we present the benefits that the new technology brought us and the results obtained in the year since the implementation of the new green technology.
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
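Feature-based registration of a visual image to a thermal image can be sketched with a generic OpenCV recipe: detect ORB keypoints, match descriptors, estimate a RANSAC homography, and warp one image into the other's frame. This is a standard recipe under assumed input files, not the authors' exact pipeline.

```python
# Sketch of visual-to-thermal image registration with ORB features and a
# RANSAC homography (generic OpenCV recipe, not the paper's exact method).
import cv2
import numpy as np

visual = cv2.imread("rack_visual.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
thermal = cv2.imread("rack_thermal.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_v, des_v = orb.detectAndCompute(visual, None)
kp_t, des_t = orb.detectAndCompute(thermal, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_v, des_t), key=lambda m: m.distance)[:200]

src = np.float32([kp_v[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the visual image into the thermal camera's frame so per-server
# regions can be mapped onto the thermal distribution.
aligned = cv2.warpPerspective(visual, H, (thermal.shape[1], thermal.shape[0]))
cv2.imwrite("visual_aligned_to_thermal.png", aligned)
```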
Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.
Conceptual Challenges in Coordinating Theoretical and Data-Centered Estimates of Probability
ERIC Educational Resources Information Center
Konold, Cliff; Madden, Sandra; Pollatsek, Alexander; Pfannkuch, Maxine; Wild, Chris; Ziedins, Ilze; Finzer, William; Horton, Nicholas J.; Kazak, Sibel
2011-01-01
A core component of informal statistical inference is the recognition that judgments based on sample data are inherently uncertain. This implies that instruction aimed at developing informal inference needs to foster basic probabilistic reasoning. In this article, we analyze and critique the now-common practice of introducing students to both…
ERIC Educational Resources Information Center
Mercurius, Neil
2005-01-01
Data-driven decision-making (D3M) appears to be the new buzz phrase for this century, the information age. On the education front, teachers and administrators are engaging in data-centered dialog in grade-level meetings, lounges, hallways, and classrooms as they brainstorm toward closing the gap in student achievement. Clearly, such discussion…
ERIC Educational Resources Information Center
Round, Jennifer E.; Campbell, A. Malcolm
2013-01-01
The ability to interpret experimental data is essential to understanding and participating in the process of scientific discovery. Reading primary research articles can be a frustrating experience for undergraduate biology students because they have very little experience interpreting data. To enhance their data interpretation skills, students…
The Congenital Heart Surgeons Society Datacenter: unique attributes as a research organization.
Caldarone, Christopher A; Williams, William G
2010-01-01
Over the last 25 years, the Congenital Heart Surgeons Society (CHSS) has evolved from an informal club to a mature organization. A central feature of the CHSS has been dedication to evaluating outcomes of congenital heart surgery across a wide array of clinical diagnoses. These research activities have been orchestrated through the CHSS Datacenter, which has developed a unique organizational structure that has strengths and weaknesses in comparison to other research organizational structures (e.g., prospective randomized trials, registries, etc). This review will highlight the unique attributes of the CHSS Datacenter with emphasis on the Datacenter's strengths and weaknesses in comparison to other organizational structures. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Measuring and Evaluating TCP Splitting for Cloud Services
NASA Astrophysics Data System (ADS)
Pathak, Abhinav; Wang, Y. Angela; Huang, Cheng; Greenberg, Albert; Hu, Y. Charlie; Kern, Randy; Li, Jin; Ross, Keith W.
In this paper, we examine the benefits of split-TCP proxies, deployed in an operational world-wide network, for accelerating cloud services. We consider a fraction of a network consisting of a large number of satellite datacenters, which host split-TCP proxies, and a smaller number of mega datacenters, which ultimately perform computation or provide storage. Using web search as an exemplary case study, our detailed measurements reveal that a vanilla TCP splitting solution deployed at the satellite DCs reduces the 95th percentile of latency by as much as 43% when compared to serving queries directly from the mega DCs. Through careful dissection of the measurement results, we characterize how individual components, including proxy stacks, network protocols, packet losses and network load, can impact the latency. Finally, we shed light on further optimizations that can fully realize the potential of the TCP splitting solution.
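The headline metric, a reduction in the 95th percentile of latency, is a simple computation over per-query latency samples; the sketch below runs it on synthetic data (the numbers are made up, not the paper's measurements).

```python
# Sketch: compare 95th-percentile latency with and without a split-TCP
# proxy, on synthetic samples (the data here are made up, not the paper's).
import random
import statistics

random.seed(7)
direct = [random.lognormvariate(5.0, 0.6) for _ in range(10_000)]   # ms
proxied = [x * random.uniform(0.4, 0.8) for x in direct]            # ms

def p95(samples):
    return statistics.quantiles(samples, n=100)[94]   # 95th percentile

reduction = 1 - p95(proxied) / p95(direct)
print(f"p95 direct  : {p95(direct):8.1f} ms")
print(f"p95 proxied : {p95(proxied):8.1f} ms")
print(f"reduction   : {reduction:6.1%}")
```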
Scientific Training in the Era of Big Data: A New Pedagogy for Graduate Education.
Aikat, Jay; Carsey, Thomas M; Fecho, Karamarie; Jeffay, Kevin; Krishnamurthy, Ashok; Mucha, Peter J; Rajasekar, Arcot; Ahalt, Stanley C
2017-03-01
The era of "big data" has radically altered the way scientific research is conducted and new knowledge is discovered. Indeed, the scientific method is rapidly being complemented and even replaced in some fields by data-driven approaches to knowledge discovery. This paradigm shift is sometimes referred to as the "fourth paradigm" of data-intensive and data-enabled scientific discovery. Interdisciplinary research with a hard emphasis on translational outcomes is becoming the norm in all large-scale scientific endeavors. Yet, graduate education remains largely focused on individual achievement within a single scientific domain, with little training in team-based, interdisciplinary data-oriented approaches designed to translate scientific data into new solutions to today's critical challenges. In this article, we propose a new pedagogy for graduate education: data-centered learning for the domain-data scientist. Our approach is based on four tenets: (1) Graduate training must incorporate interdisciplinary training that couples the domain sciences with data science. (2) Graduate training must prepare students for work in data-enabled research teams. (3) Graduate training must include education in teaming and leadership skills for the data scientist. (4) Graduate training must provide experiential training through academic/industry practicums and internships. We emphasize that this approach is distinct from today's graduate training, which offers training in either data science or a domain science (e.g., biology, sociology, political science, economics, and medicine), but does not integrate the two within a single curriculum designed to prepare the next generation of domain-data scientists. We are in the process of implementing the proposed pedagogy through the development of a new graduate curriculum based on the above four tenets, and we describe herein our strategy, progress, and lessons learned. While our pedagogy was developed in the context of graduate education, the general approach of data-centered learning can and should be applied to students and professionals at any stage of their education, including at the K-12, undergraduate, graduate, and professional levels. We believe that the time is right to embed data-centered learning within our educational system and, thus, generate the talent required to fully harness the potential of big data.
NASA Astrophysics Data System (ADS)
Beckett, Douglas J. S.; Hickey, Ryan; Logan, Dylan F.; Knights, Andrew P.; Chen, Rong; Cao, Bin; Wheeldon, Jeffery F.
2018-02-01
Quantum dot comb sources integrated with silicon photonic ring-resonator filters and modulators enable the realization of optical sub-components and modules for both inter- and intra-data-center applications. Low-noise, multi-wavelength, single-chip, laser sources, PAM4 modulation and direct detection allow a practical, scalable, architecture for applications beyond 400 Gb/s. Multi-wavelength, single-chip light sources are essential for reducing power dissipation, space and cost, while silicon photonic ring resonators offer high-performance with space and power efficiency.
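PAM4 carries two bits per symbol on four amplitude levels, which is what allows these links to double the bit rate per lane relative to NRZ; a minimal Gray-coded mapping sketch over normalized levels (-3, -1, +1, +3) follows.

```python
# Minimal PAM4 mapping sketch: 2 bits per symbol, Gray-coded onto four
# normalized amplitude levels (-3, -1, +1, +3). Illustrative only.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
INV_MAP = {v: k for k, v in GRAY_MAP.items()}

def pam4_modulate(bits):
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(levels):
    out = []
    for lvl in levels:
        # slice to the nearest ideal level
        nearest = min(INV_MAP, key=lambda ref: abs(ref - lvl))
        out.extend(INV_MAP[nearest])
    return out

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_modulate(bits)
print(symbols)                            # [3, -1, 1, -3]
print(pam4_demodulate(symbols) == bits)   # True
```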
Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren
2018-04-16
Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 µs control plane latency for data-center and high performance computing platforms.
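A common arbitration pattern in such hybrid designs is to keep short, latency-sensitive flows on the electronic packet network and to set up an optical circuit for large, long-lived flows. The sketch below illustrates that decision logic; the threshold, class names, and port bookkeeping are assumptions for illustration, not the paper's controller.

```python
# Simplified hybrid packet/circuit arbitration sketch. The threshold and
# data structures are illustrative assumptions, not the actual controller.
ELEPHANT_BYTES = 10 * 1024 * 1024   # flows larger than this get a circuit

class HybridControlPlane:
    def __init__(self, free_photonic_ports):
        self.free_ports = set(free_photonic_ports)
        self.circuits = {}            # (src_rack, dst_rack) -> port

    def route(self, src_rack, dst_rack, expected_bytes):
        key = (src_rack, dst_rack)
        if key in self.circuits:
            return f"optical circuit via port {self.circuits[key]}"
        if expected_bytes >= ELEPHANT_BYTES and self.free_ports:
            port = self.free_ports.pop()
            self.circuits[key] = port          # program the photonic switch here
            return f"new optical circuit via port {port}"
        return "electronic packet path"        # default for mice flows

cp = HybridControlPlane(free_photonic_ports=[0, 1])
print(cp.route("rack3", "rack7", 64 * 1024))          # packet path
print(cp.route("rack3", "rack7", 500 * 1024 * 1024))  # sets up a circuit
print(cp.route("rack3", "rack7", 1024))               # reuses the circuit
```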
Integrating Network Management for Cloud Computing Services
2015-06-01
Excerpt: "...abstraction and system design. In this dissertation, we make three major contributions. We first propose to consolidate the traffic and infrastructure management..." Recovered table-of-contents entries: 1.3.1 Safe Datacenter Traffic/Infrastructure Management; 1.3.2 End-host/Network Cooperative Traffic Management; 1.3.3 Direct ...
Towards energy aware optical networks and interconnects
NASA Astrophysics Data System (ADS)
Glesk, Ivan; Osadola, Tolulope; Idris, Siti
2013-10-01
In today's world, information technology has been identified as one of the major factors driving economic prosperity. Datacenter businesses have grown significantly in the past few years. The equipment in these datacenters needs to be efficiently connected, both internally and to the outside world, to enable effective exchange of information. This is why there is a need for highly scalable, energy-aware and reliable network connectivity infrastructure capable of accommodating the large volume of data exchanged at any time within the datacenter network and with the outside network in general. The devices that provide such connectivity currently require a large amount of energy to keep up with these increasing demands. In this paper, an overview of work being done towards realizing energy-aware optical networks and interconnects for datacenters is presented. An OCDMA approach is also discussed as a potential multiple-access technique for future optical network interconnections. We also present some challenges that might inhibit effective implementation of the OCDMA multiplexing scheme.
MarFS, a Near-POSIX Interface to Cloud Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inman, Jeffrey Thornton; Vining, William Flynn; Ransom, Garrett Wilson
The engineering forces driving development of “cloud” storage have produced resilient, cost-effective storage systems that can scale to 100s of petabytes, with good parallel access and bandwidth. These features would make a good match for the vast storage needs of High-Performance Computing datacenters, but cloud storage gains some of its capability from its use of HTTP-style Representational State Transfer (REST) semantics, whereas most large datacenters have legacy applications that rely on POSIX file-system semantics. MarFS is an open-source project at Los Alamos National Laboratory that allows us to present cloud-style object-storage as a scalable near-POSIX file system. We have also developed a new storage architecture to improve bandwidth and scalability beyond what’s available in commodity object stores, while retaining their resilience and economy. Additionally, we present a scheme for scaling the POSIX interface to allow billions of files in a single directory and trillions of files in total.
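The central idea, presenting object storage behind a file-like interface, can be sketched as a thin wrapper that maps open/read/write/close onto whole-object GET and PUT operations. This is a toy in-memory illustration of the mapping only; MarFS itself adds striping across objects, a scalable metadata service, and much more.

```python
# Toy sketch of a near-POSIX facade over an object store: file paths map
# to object keys, write() buffers locally and PUTs on close(), read()
# GETs the whole object. Not MarFS; just the interface-mapping idea.
import io

class ObjectStore:                      # stand-in for a REST object store
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = bytes(data)
    def get(self, key):
        return self._objects[key]

class NearPosixFile:
    def __init__(self, store, path, mode):
        self.store, self.key, self.mode = store, path.lstrip("/"), mode
        self.buf = io.BytesIO(store.get(self.key) if mode == "rb" else b"")
    def read(self, n=-1):
        return self.buf.read(n)
    def write(self, data):
        return self.buf.write(data)
    def close(self):
        if self.mode == "wb":           # flush the buffered object on close
            self.store.put(self.key, self.buf.getvalue())

store = ObjectStore()
f = NearPosixFile(store, "/campaign/run42/output.dat", "wb")
f.write(b"checkpoint bytes")
f.close()
g = NearPosixFile(store, "/campaign/run42/output.dat", "rb")
print(g.read())                          # b'checkpoint bytes'
```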
Towards a carrier SDN: an example for elastic inter-datacenter connectivity.
Velasco, L; Asensio, A; Berral, J L; Castro, A; López, V
2014-01-13
We propose a network-driven transfer mode for cloud operations in a step towards a carrier SDN. Inter-datacenter connectivity is requested in terms of volume of data and completion time. The SDN controller translates and forwards requests to an ABNO controller in charge of a flexgrid network.
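Requesting connectivity in terms of data volume and completion time implies that the controller must derive an equivalent bandwidth, and from it a spectrum allocation, before programming the flexgrid network. The conversion is simple; the slot granularity and per-slot capacity below are assumptions for illustration.

```python
# Sketch: translate a transfer request (volume, deadline) into bandwidth
# and a number of flexgrid slots. The 12.5 GHz slot granularity and the
# assumed capacity per slot are illustrative, not values from the paper.
import math

def required_gbps(volume_gb, seconds_to_deadline):
    return volume_gb * 8 / seconds_to_deadline          # GB -> Gb

def flexgrid_slots(gbps, gbps_per_12p5ghz_slot=25.0):
    return math.ceil(gbps / gbps_per_12p5ghz_slot)

volume_gb, deadline_s = 2_000, 600      # move 2 TB within 10 minutes
bw = required_gbps(volume_gb, deadline_s)
print(f"{bw:.1f} Gb/s -> {flexgrid_slots(bw)} x 12.5 GHz slots")
```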
Guidelines for Datacenter Energy Information System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Reshma; Mahdavi, Rod; Mathew, Paul
2013-12-01
The purpose of this document is to provide structured guidance to data center owners, operators, and designers, to empower them with information on how to specify and procure data center energy information systems (EIS) for managing the energy utilization of their data centers. Data centers are typically energy-intensive facilities that can consume up to 100 times more energy per unit area than a standard office building (FEMP 2013). This guidance facilitates “data-driven decision making,” which will be enabled by following the approach outlined in the guide. This will bring speed, clarity, and objectivity to any energy or asset management decisions because of the ability to monitor and track an energy management project’s performance.
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-05-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.
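The flavor of an on-demand watchpoint, inspecting only the records that satisfy a guard predicate instead of pausing the whole distributed job, can be approximated in stock PySpark as a lazily evaluated filter plus a small take. This is a plain-Spark sketch for intuition, not BigDebug's instrumentation.

```python
# Plain-PySpark sketch of the "watchpoint" idea: instead of pausing the
# whole job, pull back only the few records matching a guard predicate.
# This is an approximation for intuition, not BigDebug's implementation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("watchpoint-sketch").getOrCreate()
sc = spark.sparkContext

records = sc.parallelize(range(1_000_000))
stage1 = records.map(lambda x: x * 3 - 7)

# "Watchpoint": a guard predicate evaluated lazily on the cluster;
# only matching intermediate records are shipped back to the driver.
suspicious = stage1.filter(lambda v: v < 0).take(10)
print("sample of suspicious intermediate records:", suspicious)

spark.stop()
```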
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... the move of this datacenter to an ISE facility, and to adopt a corresponding disaster recovery network... rent cabinet space in the ISE facility, and to adopt a corresponding disaster recovery network fee... datacenter is not operational. As the Exchange is moving its hardware to an ISE-run facility, the Exchange...
Optical datacenter network employing slotted (TDMA) operation for dynamic resource allocation
NASA Astrophysics Data System (ADS)
Bakopoulos, P.; Tokas, K.; Spatharakis, C.; Patronas, I.; Landi, G.; Christodoulopoulos, K.; Capitani, M.; Kyriakos, A.; Aziz, M.; Reisis, D.; Varvarigos, E.; Zahavi, E.; Avramopoulos, H.
2018-02-01
The soaring traffic demands in datacenter networks (DCNs) are outpacing progresses in CMOS technology, challenging the bandwidth and energy scalability of currently established technologies. Optical switching is gaining traction as a promising path for sustaining the explosive growth of DCNs; however, its practical deployment necessitates extensive modifications to the network architecture and operation, tailored to the technological particularities of optical switches (i.e. no buffering, limitations in radix size and speed). European project NEPHELE is developing an optical network infrastructure that leverages optical switching within a software-defined networking (SDN) framework to overcome the bandwidth and energy scaling challenges of datacenter networks. An experimental validation of the NEPHELE data plane is reported based on commercial off-the-shelf optical components controlled by FPGA boards. To facilitate dynamic allocation of the network resources and perform collision-free routing in a lossless network environment, slotted operation is employed (i.e. using time-division multiple-access - TDMA). Error-free operation of the NEPHELE data plane is verified for 200 μs slots in various scenarios that involve communication between Ethernet hosts connected to custom-designed top-of-rack (ToR) switches, located in the same or in different datacenter pods. Control of the slotted data plane is obtained through an SDN framework comprising an OpenDaylight controller with appropriate add-ons. Communication between servers in the optical-ToR is demonstrated with various routing scenarios, concerning communication between hosts located in the same rack or in different racks, within the same or different datacenter pods. Error-free operation is confirmed for all evaluated scenarios, underpinning the feasibility of the NEPHELE architecture.
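Collision-free slotted operation amounts to assigning each (source, destination) pair its own time slots within a fixed frame so that no source transmits, and no destination receives, twice in the same slot. A minimal slot-assignment sketch follows; the frame length and demand format are assumptions for illustration.

```python
# Minimal TDMA slot-allocation sketch: grant each (src, dst) demand slots
# such that no source transmits, and no destination receives, twice in
# the same slot. Frame size and demands are illustrative assumptions.
def allocate_slots(demands, slots_per_frame):
    """demands: {(src, dst): n_slots_requested} -> {(src, dst): [slot ids]}"""
    busy_src = {s: set() for s in range(slots_per_frame)}
    busy_dst = {s: set() for s in range(slots_per_frame)}
    grants = {pair: [] for pair in demands}
    for (src, dst), want in sorted(demands.items(), key=lambda kv: -kv[1]):
        for slot in range(slots_per_frame):
            if len(grants[(src, dst)]) == want:
                break
            if src not in busy_src[slot] and dst not in busy_dst[slot]:
                busy_src[slot].add(src)
                busy_dst[slot].add(dst)
                grants[(src, dst)].append(slot)
    return grants

demands = {("tor1", "tor4"): 3, ("tor2", "tor4"): 2, ("tor1", "tor3"): 2}
print(allocate_slots(demands, slots_per_frame=8))
```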
Saldaña Barrios, Juan Jose; Mendoza, Luis; Pitti, Edgardo; Vargas, Miguel
2016-10-21
In this work, the authors present two eHealth platforms that are examples of how health systems are migrating from client-server architecture to the web-based and ubiquitous paradigm. These two platforms were modeled, designed, developed and implemented with positive results. First, using ambient-assisted living and ubiquitous computing, the authors enhance how palliative care is provided to elderly patients and patients with terminal illness, making the work of doctors, nurses and other health actors easier. Second, applying machine learning methods and a data-centered, ubiquitous repository of patients' results, the authors intend to improve the Down's syndrome risk estimation process with more accurate predictions based on the parameters of local women patients. These two eHealth platforms can improve the quality of life, both physically and psychologically, of patients and their families in the country of Panama. © The Author(s) 2016.
The LASP Interactive Solar IRradiance Datacenter (LISIRD)
NASA Astrophysics Data System (ADS)
Pankratz, C. K.; Lindholm, D. M.; Snow, M.; Knapp, B.; Woodraska, D.; Templeman, B.; Woods, T. N.; Eparvier, F. G.; Fontenla, J.; Harder, J.; McClintock, W. E.
2007-12-01
The Laboratory for Atmospheric and Space Physics (LASP) has been making space-based measurements of solar irradiance for many decades, and thus has established an extensive catalog of past and ongoing space- based solar irradiance measurements. In order to maximize the accessibility and usability of solar irradiance data and information from multiple missions, LASP is developing the LASP Interactive Solar IRradiance Datacenter (LISIRD) to better serve the needs of researchers, educators, and the general public. This data center is providing interactive and direct access to a comprehensive set of solar spectral irradiance measurements from the soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as state-of-the-art measurements of Total Solar Irradiance (TSI). LASP researchers are also responsible for an extensive set of solar irradiance models and historical solar irradiance reconstructions, which will also be accessible via this data center over time. LISIRD currently provides access to solar irradiance data sets from the SORCE, TIMED-SEE, UARS-SOLSTICE, and SME instruments, spanning 1981 to the present, as well as a Lyman Alpha composite that is available from 1947 to the present. LISIRD also provides data products of interest to the space weather community, whose needs demand high time cadence and near real-time data delivery. This poster provides an overview of the LISIRD system, summarizes the data sets currently available, describes future plans and capabilities, and provides details on how to access solar irradiance data through LISIRD's various interfaces.
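Programmatic access to such a time-series service typically reduces to fetching a delimited file over HTTP and parsing it; the sketch below assumes a CSV response with time and irradiance columns. The URL is a placeholder, not the actual LISIRD endpoint.

```python
# Sketch of programmatic access to an irradiance time series over HTTP.
# The URL below is a placeholder, not the actual LISIRD endpoint; the only
# assumption is a CSV response with time and irradiance columns.
import csv
import io
import urllib.request

URL = "https://example.org/lisird/placeholder/irradiance.csv"  # placeholder

def fetch_irradiance(url):
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

if __name__ == "__main__":
    for row in fetch_irradiance(URL)[:5]:
        print(row)       # e.g. {'time': ..., 'irradiance': ...}
```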
Experience on HTCondor batch system for HEP and other research fields at KISTI-GSDC
NASA Astrophysics Data System (ADS)
Ahn, S. U.; Jaikar, A.; Kong, B.; Yeo, I.; Bae, S.; Kim, J.
2017-10-01
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI), located in Daejeon, South Korea, is the unique datacenter in the country that supports, with its computing resources, fundamental research fields dealing with large-scale data. For historical reasons it has run the Torque batch system, while recently it started running HTCondor for new systems. Having different kinds of batch systems implies inefficiency in terms of resource management and utilization. We conducted research on resource management with HTCondor for several user scenarios corresponding to the user environments that GSDC currently supports. A recent study of the resource usage patterns at GSDC is taken into account in this research to build the possible user scenarios. Checkpointing and the Super-Collector model of HTCondor give us a more efficient and flexible way to manage resources, and the Grid Gate provided by HTCondor helps interface with the Grid environment. In this paper, the essential features of HTCondor exploited in this work are described and practical examples of HTCondor cluster configuration in our cases are presented.
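For a concrete feel of the batch-system side, a job is described by a submit description and handed to the scheduler. The sketch below generates a minimal HTCondor submit file and calls condor_submit; the executable, arguments, and resource requests are placeholders.

```python
# Sketch: generate a minimal HTCondor submit description and hand it to
# condor_submit. The executable, arguments and resource requests are
# placeholders; adapt them to the actual analysis job.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe       = vanilla
    executable     = run_analysis.sh
    arguments      = --dataset run2017
    request_cpus   = 1
    request_memory = 2 GB
    output         = job.$(Cluster).$(Process).out
    error          = job.$(Cluster).$(Process).err
    log            = job.$(Cluster).log
    queue 10
    """)

with open("analysis.sub", "w") as f:
    f.write(submit_description)

# Requires an HTCondor submit node; prints the assigned cluster id.
subprocess.run(["condor_submit", "analysis.sub"], check=True)
```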
Physical-layer network coding for passive optical interconnect in datacenter networks.
Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia
2017-07-24
We introduce a physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of the PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that, in order to achieve negligible packet drop, wavelength usage can be reduced by half, while a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.
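At its simplest, network coding lets a relay forward the combination of two packets so that each end host recovers the other's packet by XOR-ing with its own copy. The sketch below shows that bitwise relationship as a digital analogue for intuition; PLNC performs the combining on physical-layer signals.

```python
# Bitwise sketch of the network-coding relationship exploited by PLNC:
# the relay forwards coded = pkt_a XOR pkt_b; each host recovers the
# other's packet by XOR-ing with its own copy. (A digital analogue for
# intuition; PLNC performs the combining on physical-layer waveforms.)
def xor_bytes(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"host-A payload!!"
pkt_b = b"host-B payload??"

coded = xor_bytes(pkt_a, pkt_b)        # what the relay broadcasts
recovered_by_a = xor_bytes(coded, pkt_a)
recovered_by_b = xor_bytes(coded, pkt_b)

print(recovered_by_a == pkt_b, recovered_by_b == pkt_a)   # True True
```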
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since resources are limited in these private clouds, maximizing resource utilization and providing guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms. PMID:26473166
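The scheduling decision described, matching job type against current resource availability, can be sketched independently of CloudSim (a Java toolkit). The Python sketch below uses made-up job classes and a simple best-fit rule, not the paper's exact algorithm.

```python
# Simplified scheduling sketch: place each job on the host with the least
# remaining capacity that still fits it (best fit), and send "interactive"
# jobs only to hosts that allow them. Job classes and the rule are
# illustrative assumptions, not the paper's algorithm or CloudSim code.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpus_free: int
    interactive_ok: bool = True

@dataclass
class Job:
    name: str
    cpus: int
    kind: str = "batch"          # "batch" or "interactive"

def schedule(jobs, hosts):
    placements = {}
    for job in sorted(jobs, key=lambda j: -j.cpus):
        candidates = [h for h in hosts
                      if h.cpus_free >= job.cpus
                      and (job.kind != "interactive" or h.interactive_ok)]
        if not candidates:
            placements[job.name] = None          # stays queued
            continue
        best = min(candidates, key=lambda h: h.cpus_free)   # best fit
        best.cpus_free -= job.cpus
        placements[job.name] = best.name
    return placements

hosts = [Host("h1", 8), Host("h2", 4, interactive_ok=False), Host("h3", 16)]
jobs = [Job("sim", 6), Job("notebook", 2, "interactive"), Job("etl", 4)]
print(schedule(jobs, hosts))
```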
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Chughtai, Mohsan Niaz
2018-05-01
In this paper the IRIS (Integrated Router Interconnected Spectrally), an optical-domain architecture for datacenter networks, is analyzed. The IRIS is studied in combination with advanced modulation formats (M-QAM) and a coherent optical receiver. The channel impairments are compensated using DSP algorithms following the coherent receiver. The proposed scheme allows N² multiplexed wavelengths for an N×N switch size. The performance of the N×N IRIS switch, with and without wavelength conversion, is analyzed for different baud rates over M-QAM modulation formats. The performance of the system is analyzed in terms of bit error rate (BER) versus OSNR curves.
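The BER-versus-OSNR characterization can be reproduced in miniature with a Monte Carlo sketch: map bits onto a QAM constellation, add Gaussian noise for a given SNR, slice, and count bit errors. The sketch below uses Gray-mapped 4-QAM (QPSK) and treats SNR per symbol as a stand-in for OSNR; it is illustrative, not the paper's simulation.

```python
# Monte Carlo sketch: bit error rate of Gray-mapped 4-QAM (QPSK) versus
# SNR, as a miniature stand-in for the BER-vs-OSNR curves discussed above.
import numpy as np

rng = np.random.default_rng(0)

def qpsk_ber(snr_db, n_bits=200_000):
    bits = rng.integers(0, 2, size=n_bits).reshape(-1, 2)
    # Gray mapping: first bit sets the I sign, second bit the Q sign
    symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    noise_var = 10 ** (-snr_db / 10)                 # noise power per symbol
    noise = rng.normal(scale=np.sqrt(noise_var / 2), size=(len(symbols), 2))
    rx = symbols + noise[:, 0] + 1j * noise[:, 1]
    rx_bits = np.column_stack((rx.real < 0, rx.imag < 0)).astype(int)
    return np.mean(rx_bits != bits)

for snr in (4, 7, 10, 13):
    print(f"SNR {snr:2d} dB -> BER {qpsk_ber(snr):.2e}")
```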
Exploring cloud and big data components for SAR archiving and analysis
NASA Astrophysics Data System (ADS)
Baker, S.; Crosby, C. J.; Meertens, C.; Phillips, D.
2017-12-01
Under the Geodesy Advancing Geoscience and EarthScope (GAGE) NSF Cooperative Agreement, UNAVCO has seen the volume of the SAR Data Archive grow at a substantial rate, from 2 TB in Y1 and 5 TB in Y2 to 41 TB in Y3 primarily due to WInSAR PI proposal management of ALOS-2/JAXA (Japan Aerospace Exploration Agency) data and to a lesser extent Supersites and other data collections. JAXA provides a fixed number of scenes per year for each PI, and some data files are 50-60GB each, which accounts for the large volume of data. In total, over 100TB of SAR data are in the WInSAR/UNAVCO archive and a large portion of these are available unrestricted for WInSAR members. In addition to the existing data, newer data streams from the Sentinel-1 and NISAR missions will require efficient processing pipelines and easily scalable infrastructure to handle processed results. With these growing data sizes and space concerns, the SAR archive operations migrated to the Texas Advanced Computing Center (TACC) via an NSF XSEDE proposal in spring 2017. Data are stored on an HPC system while data operations are running on Jetstream virtual machines within the same datacenter. In addition to the production data operations, testing was done in early 2017 with container based InSAR processing analysis using JupyterHub and Docker images deployed on a VM cluster on Jetstream. The JupyterHub environment is well suited for short courses and other training opportunities for the community such as labs for university courses on InSAR. UNAVCO is also exploring new processing methodologies using DC/OS (the datacenter operating system) for batch and stream processing workflows and time series analysis with Big Data open source components like the Spark, Mesos, Akka, Cassandra, Kafka (SMACK) stack. The comparison of the different methodologies will provide insight into the pros and cons for each and help the SAR community with decisions about infrastructure and software requirements to meet their research goals.
Gokmen, Tayfun; Vlasov, Yurii
2016-01-01
In recent years, deep neural networks (DNN) have demonstrated significant business impact in large scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc. Training of large DNNs, however, is universally considered a time-consuming and computationally intensive task that demands datacenter-scale computational resources recruited for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power. The proposed RPU device can store and update the weight values locally, thus minimizing data movement during training and allowing the locality and parallelism of the training algorithm to be fully exploited. We evaluate the effect of various RPU device features/non-idealities and system parameters on performance in order to derive the device and system level specifications for implementation of an accelerator chip for DNN training in a realistic CMOS-compatible technology. For large DNNs with about 1 billion weights this massively parallel RPU architecture can achieve acceleration factors of 30,000× compared to state-of-the-art microprocessors while providing power efficiency of 84,000 GigaOps/s/W. Problems that currently require days of training on a datacenter-size cluster with thousands of machines could be addressed within hours on a single RPU accelerator. A system consisting of a cluster of RPU accelerators will be able to tackle Big Data problems with trillions of parameters that are impossible to address today, such as natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, and integration and analysis of multimodal sensory data flows from a massive number of IoT (Internet of Things) sensors. PMID:27493624
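The defining property of the RPU concept is that each cross-point weight is updated locally as an outer product of the forward activation and the backpropagated error, so no weights are moved during training. The NumPy sketch below shows that update rule on an ordinary dense array; device physics, stochastic pulsing, and the non-idealities studied in the paper are not modeled.

```python
# Sketch of the local update an RPU array performs: each cross-point
# weight W[i, j] changes by lr * delta[i] * x[j], i.e. a rank-1 outer
# product applied in place. Device non-idealities are not modeled here.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 256, 128
W = rng.normal(scale=0.1, size=(n_out, n_in))     # cross-point conductances

x = rng.normal(size=n_in)            # forward input vector
y = W @ x                            # forward pass (analog MAC in the RPU)
delta = rng.normal(size=n_out)       # backpropagated error at this layer
lr = 1e-3

W += lr * np.outer(delta, x)         # local, fully parallel weight update
print("update applied, weight matrix shape:", W.shape)
```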
Demonstration of 720×720 optical fast circuit switch for intra-datacenter networks
NASA Astrophysics Data System (ADS)
Ueda, Koh; Mori, Yojiro; Hasegawa, Hiroshi; Matsuura, Hiroyuki; Ishii, Kiyo; Kuwatsuka, Haruhiko; Namiki, Shu; Sato, Ken-ichi
2016-03-01
Intra-datacenter traffic is growing by more than 20% a year. In typical datacenters, many racks/pods including servers are interconnected via multi-tier electrical switches. The electrical switches necessitate power-consuming optical-to-electrical (OE) and electrical-to-optical (EO) conversion, the power consumption of which increases with traffic. To overcome this problem, optical switches that eliminate costly OE and EO conversion and enable low-power-consumption switching are being investigated. There are two major requirements for the optical switch. First, it must have a high port count to construct reduced-tier intra-datacenter networks. Second, switching must be fast enough that most of the traffic load can be offloaded from electrical switches. Among various optical switches, we focus on those based on arrayed-waveguide gratings (AWGs), since the AWG is a passive device with minimal power consumption. We previously proposed a high-port-count optical switch architecture that utilizes tunable lasers, route-and-combine switches, and wavelength-routing switches comprised of couplers, erbium-doped fiber amplifiers (EDFAs), and AWGs. We employed conventional external cavity lasers whose wavelength-tuning speed was slower than 100 ms. In this paper, we demonstrate a large-scale optical switch that offers fast wavelength routing. We construct a 720×720 optical switch using recently developed lasers whose wavelength-tuning period is below 460 μs. We evaluate the switching time via bit-error-ratio measurements and achieve a 470-μs switching time (including a 10-μs guard time to handle EDFA surge). To the best of our knowledge, this is the first demonstration of such a large-scale optical switch with practical switching time.
[Information management in multicenter studies: the Brazilian longitudinal study for adult health].
Duncan, Bruce Bartholow; Vigo, Álvaro; Hernandez, Émerson; Luft, Vivian Cristine; Ahlert, Hubert; Bergmann, Kaiser; Mota, Eduardo
2013-06-01
Information management in large multicenter studies requires a specialized approach. The Estudo Longitudinal da Saúde do Adulto (ELSA-Brasil - Brazilian Longitudinal Study for Adult Health) has created a Datacenter to enter and manage its data system. The aim of this paper is to describe the steps involved, including the information entry, transmission and management methods. A web system was developed in order to allow, in a safe and confidential way, online data entry, checking and editing, as well as the incorporation of data collected on paper. Additionally, a Picture Archiving and Communication System was implemented and customized for echocardiography and retinography. It stores the images received from the Investigation Centers and makes them available at the Reading Centers. Finally, data extraction and cleaning processes were developed to create databases in formats that enable analyses in multiple statistical packages.
Commissioning the CERN IT Agile Infrastructure with experiment workloads
NASA Astrophysics Data System (ADS)
Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia
2014-06-01
In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing centre under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one located in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
Numerical Modeling and Optimization of Warm-water Heat Sinks
NASA Astrophysics Data System (ADS)
Hadad, Yaser; Chiarot, Paul
2015-11-01
For cooling in large data-centers and supercomputers, water is increasingly replacing air as the working fluid in heat sinks. Utilizing water provides unique capabilities; for example: higher heat capacity, Prandtl number, and convection heat transfer coefficient. The use of warm, rather than chilled, water has the potential to provide increased energy efficiency. The geometric and operating parameters of the heat sink govern its performance. Numerical modeling is used to examine the influence of geometry and operating conditions on key metrics such as thermal and flow resistance. This model also facilitates studies on cooling of electronic chip hot spots and failure scenarios. We report on the optimal parameters for a warm-water heat sink to achieve maximum cooling performance.
Scalability analysis methodology for passive optical interconnects in data center networks using PAM
NASA Astrophysics Data System (ADS)
Lin, R.; Szczerba, Krzysztof; Agrell, Erik; Wosinska, Lena; Tang, M.; Liu, D.; Chen, J.
2017-11-01
A framework is developed for modeling the fundamental impairments in optical datacenter interconnects, i.e., the power loss and the receiver noise. This framework makes it possible to analyze the trade-offs between data rates, modulation order, and the number of ports that can be supported in optical interconnect architectures, while guaranteeing that the required signal-to-noise ratios are satisfied. To the best of our knowledge, this important assessment methodology is not yet available. As a case study, the trade-offs are investigated for three coupler-based top-of-rack interconnect architectures, which suffer from serious insertion loss. The results show that using single-port transceivers with 10 GHz bandwidth, avalanche photodiode detectors, and quadratical pulse amplitude modulation, more than 500 ports can be supported.
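The trade-off analyzed above couples the splitting loss of coupler-based architectures with the receiver sensitivity allowed by the chosen modulation order. As a rough illustration of that kind of power-budget reasoning (not the authors' framework), the sketch below bounds the port count from a link budget; the launch power, sensitivity, and excess-loss values are hypothetical.

```python
def max_ports(p_launch_dbm, rx_sensitivity_dbm, excess_loss_db=3.0):
    """Rough upper bound on coupler port count from a power budget.

    An ideal 1xN coupler splits power N ways, i.e. 10*log10(N) dB of
    intrinsic splitting loss; whatever budget remains after excess loss
    and the receiver sensitivity requirement bounds N.
    """
    budget_db = p_launch_dbm - rx_sensitivity_dbm - excess_loss_db
    if budget_db <= 0:
        return 0
    return int(10 ** (budget_db / 10.0))

# hypothetical numbers: 0 dBm launch power, -24 dBm APD sensitivity at the
# chosen modulation order and bit-error-rate target
print(max_ports(0.0, -24.0))  # -> 125 ports under these toy assumptions
```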
Rofoee, Bijan Rahimzadeh; Zervas, Georgios; Yan, Yan; Amaya, Norberto; Qin, Yixuan; Simeonidou, Dimitra
2013-03-11
The paper presents a novel network architecture-on-demand approach using on-chip and off-chip implementations, enabling programmable, highly efficient and transparent networking, well suited for intra-datacenter communications. The implemented FPGA-based adaptable line-card with on-chip design, along with an architecture on demand (AoD) based off-chip flexible switching node, delivers single-chip dual L2-Packet/L1 time-shared optical network (TSON) server Network Interface Cards (NICs) interconnected through a transparent AoD-based switch. It enables hitless adaptation between Ethernet over wavelength-switched network (EoWSON) and TSON-based sub-wavelength switching, providing flexible bitrates while meeting strict bandwidth and QoS requirements. The on- and off-chip performance results show high throughput (9.86 Gb/s Ethernet, 8.68 Gb/s TSON), high QoS, as well as hitless switch-over.
JANUS: A Compilation System for Balancing Parallelism and Performance in OpenVX
NASA Astrophysics Data System (ADS)
Omidian, Hossein; Lemieux, Guy G. F.
2018-04-01
Embedded systems typically do not have enough on-chip memory for an entire image buffer. Programming systems like OpenCV operate on entire image frames at each step, making them use excessive memory bandwidth and power. In contrast, the paradigm used by OpenVX is much more efficient; it uses image tiling, and the compilation system is allowed to analyze and optimize the operation sequence, specified as a compute graph, before doing any pixel processing. In this work, we are building a compilation system for OpenVX that can analyze and optimize the compute graph to take advantage of parallel resources in many-core systems or FPGAs. Using a database of prewritten OpenVX kernels, it automatically adjusts the image tile size and applies kernel duplication and coalescing to meet a defined area (resource) target or a specified throughput target. This allows a single compute graph to target implementations with a wide range of performance needs or capabilities, e.g. from handheld to datacenter, that use minimal resources and power to reach the performance target.
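The constraint driving this design is that a tile's working set, not the whole frame, must fit in on-chip memory, with tile size then traded against kernel duplication to hit an area or throughput target. The sketch below illustrates only the tile-sizing side of that reasoning; the memory budget, buffer count, and halo width are hypothetical parameters, not JANUS internals.

```python
def largest_square_tile(on_chip_bytes, bytes_per_pixel, n_buffers, halo=1):
    """Largest square tile whose working set still fits on chip.

    n_buffers counts the intermediate images that must be resident at
    once (roughly one per live edge of the compute graph), and halo is
    the extra border needed by windowed kernels such as 3x3 filters.
    """
    side = 1
    while True:
        working_set = n_buffers * bytes_per_pixel * (side + 2 * halo) ** 2
        if working_set > on_chip_bytes:
            return side - 1
        side += 1

# e.g. 256 KB of on-chip SRAM, 1 byte/pixel, 4 live buffers, 3x3 kernels
print(largest_square_tile(256 * 1024, 1, 4, halo=1))  # -> 254
```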
100 Gb/s optical discrete multi-tone transceivers for intra- and inter-datacenter networks
NASA Astrophysics Data System (ADS)
Okabe, Ryo; Tanaka, Toshiki; Nishihara, Masato; Kai, Yutaka; Takahara, Tomoo; Liu, Bo; Li, Lei; Tao, Zhenning; Rasmussen, Jens C.
2016-03-01
Discrete multi-tone (DMT) technology is an attractive modulation technology for short-reach applications due to its high spectral efficiency and simple configuration. In this paper, we first explain the features of DMT technology and then discuss the impact of fiber dispersion and chirp on the frequency responses of the DMT signal and the importance of the relationship between chirp and the optical transmission band. Next, we explain our experiments on 100-Gb/s DMT transmission over 10 km in the O-band using directly modulated lasers for low-cost applications. In an inter-datacenter network of more than several tens of kilometers, fiber dispersion mainly limits system performance. We also discuss our experiment on 100-Gb/s DMT transmission over up to 100 km in the C-band without a dispersion compensator by using vestigial sideband spectrum shaping and nonlinear compensation.
NASA Astrophysics Data System (ADS)
Gao, Tao; Li, Xin; Guo, Bingli; Yin, Shan; Li, Wenzhe; Huang, Shanguo
2017-07-01
Multipath provisioning is a survivable and resource-efficient solution against increasing link failures caused by natural or man-made disasters in elastic optical datacenter networks (EODNs). Nevertheless, the conventional multipath provisioning scheme is designed only for connecting a specific node pair. Moreover, the number of node-disjoint paths between any two nodes is restricted by the network connectivity, which has a fixed value for a given topology. Recently, the concept of content connectivity in EODNs has been proposed, which guarantees that a user can be served by any datacenter hosting the required content regardless of where it is located. From this new perspective, we propose a survivable multipath provisioning with content connectivity (MPCC) scheme, which is expected to improve spectrum efficiency and whole-system survivability. We formulate the MPCC scheme as an Integer Linear Program (ILP) for the static traffic scenario, and a heuristic approach is proposed for the dynamic traffic scenario. Furthermore, to adapt MPCC to variations of the network state in the dynamic traffic scenario, we propose a dynamic content placement (DCP) strategy in the MPCC scheme that detects changes in the distribution of user requests and adjusts the content location dynamically. Simulation results indicate that the MPCC scheme can reduce spectrum consumption by over 20% compared with the conventional multipath provisioning scheme in the static traffic scenario; in the dynamic traffic scenario, it reduces spectrum consumption by over 20% and blocking probability by over 50%. Meanwhile, benefiting from the DCP strategy, the MPCC scheme adapts well to variations in the distribution of user requests.
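Content connectivity, as used above, means a request only needs to reach some datacenter hosting the content, so disjoint paths can terminate at different replicas rather than one fixed destination. A common way to compute such paths is to attach a virtual sink to every replica and run an ordinary disjoint-path search toward it; the networkx sketch below illustrates that idea and is not the paper's ILP or heuristic.

```python
import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths

def content_disjoint_paths(G, source, replica_nodes):
    """Node-disjoint paths from `source` to *any* datacenter in `replica_nodes`.

    Attaching a virtual sink to every replica turns "reach any copy of
    the content" into an ordinary single-destination disjoint-path problem.
    """
    H = G.copy()
    sink = "_virtual_sink"
    H.add_node(sink)
    for dc in replica_nodes:
        H.add_edge(dc, sink)
    paths = node_disjoint_paths(H, source, sink)
    return [p[:-1] for p in paths]  # strip the virtual sink from each path

# toy 6-node ring with the content replicated at nodes 4 and 5
G = nx.cycle_graph(6)
print(content_disjoint_paths(G, 0, [4, 5]))  # e.g. [[0, 5], [0, 1, 2, 3, 4]]
```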
Professional social networking.
Rowley, Robert D
2014-12-01
We review the current state of social communication between healthcare professionals, the role of consumer social networking, and some emerging technologies to address the gaps. In particular, the review covers (1) the current state of loose social networking for continuing medical education (CME) and other broadcast information dissemination; (2) social networking for business promotion; (3) social networking for peer collaboration, including simple communication as well as more robust data-centered collaboration around patient care; and (4) engaging patients on social platforms, including integrating consumer-originated data into the mix of healthcare data. We will see how, as the nature of healthcare delivery moves from the institution-centric way of tradition to a more social and networked ambulatory pattern that we see emerging today, the nature of health IT has also moved from enterprise-centric systems to more socially networked, cloud-based options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, Geoff
2014-09-08
This program exercises the robotic elements in Oracle StorageTek tape libraries. This is useful in two known cases: 1) shaking out marginal or new hardware by ensuring hardware robustness under high-duty usage, and 2) ensuring tape libraries are visually interesting during datacenter tours.
FastLane: An Agile Congestion Signaling Mechanism for Improving Datacenter Performance
2013-05-20
112 Gb/s sub-cycle 16-QAM Nyquist-SCM for intra-datacenter connectivity
NASA Astrophysics Data System (ADS)
Bakopoulos, Paraskevas; Dris, Stefanos; Argyris, Nikolaos; Spatharakis, Christos; Avramopoulos, Hercules
2016-03-01
Datacenter traffic is exploding. Ongoing advancements in network infrastructure that ride on Moore's law are unable to keep up, necessitating the introduction of multiplexing and advanced modulation formats for optical interconnects in order to overcome bandwidth limitations, and scale lane speeds with energy- and cost-efficiency to 100 Gb/s and beyond. While the jury is still out as to how this will be achieved, schemes relying on intensity modulation with direct detection (IM/DD) are regarded as particularly attractive, due to their inherent implementation simplicity. Moreover, the scaling-out of datacenters calls for longer transmission reach exceeding 300 m, requiring single-mode solutions. In this work we advocate using 16-QAM sub-cycle Nyquist-SCM as a simpler alternative to discrete multitone (DMT), but which is still more bandwidth-efficient than PAM-4. The proposed optical interconnect is demonstrated at 112 Gb/s, which, to the best of our knowledge, is the highest rate achieved in a single-polarization implementation of SCM. Off-the-shelf components are used: A DFB laser, a 24.3 GHz electro-absorption modulator (EAM) and a limiting photoreceiver, combined with equalization through digital signal processing (DSP) at the receiver. The EAM is driven by a low-swing (<1 V) arbitrary waveform generator (AWG), which produces a 28 Gbaud 16-QAM electrical signal with carrier frequency at ~15 GHz. Tight spectral shaping is leveraged as a means of maintaining signal fidelity when using low-bandwidth electro-optic components; matched root-raised-cosine transmit and receive filters with 0.1 excess bandwidth are thus employed. Performance is assessed through transmission experiments over 1250 m and 2000 m of SMF.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
The data operation centre tool. Architecture and population strategies
NASA Astrophysics Data System (ADS)
Dal Pra, Stefano; Crescente, Alberto
2012-12-01
Keeping track of the layout of the computing resources in a big datacenter is a complex task. DOCET is a database-based web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve needed information about one or more datacenters, such as available hardware, software and their status. Having a suitable application is however useless until most of the information about the centre has been inserted into DOCET's database. Manually inserting all the information from scratch is an unfeasible task. After describing DOCET's high-level architecture, its main features and current development track, we present and discuss the work done to populate the DOCET database for the INFN-T1 site by retrieving information from a heterogeneous variety of authoritative sources, such as DNS, DHCP, Quattor host profiles, etc. We then describe the work being done to integrate DOCET with some common management operations, such as adding a newly installed host to DHCP and DNS, or creating a suitable Quattor profile template for it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A
Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
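The analysis framework described above ultimately presents administrators with the set of allocations that are not dominated in the (energy, utility) plane. As a small illustration of what such a Pareto front contains, the sketch below filters non-dominated points from a list of candidate allocations; the candidate values are invented, and this brute-force filter is not the multi-objective genetic algorithm used in the paper.

```python
def pareto_front(points):
    """Allocations not dominated in the (energy, utility) plane.

    One point dominates another if it consumes no more energy and earns
    at least as much utility, with at least one strict improvement.
    """
    front = []
    for e, u in points:
        dominated = any(
            (e2 <= e and u2 >= u) and (e2 < e or u2 > u)
            for e2, u2 in points
        )
        if not dominated:
            front.append((e, u))
    return sorted(front)

# invented candidate allocations: (energy consumed, utility earned)
candidates = [(100, 50), (120, 80), (90, 45), (120, 70), (150, 85)]
print(pareto_front(candidates))  # -> [(90, 45), (100, 50), (120, 80), (150, 85)]
```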
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinda, Peter August
2015-03-17
This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3), although a prototype connecting Palacios with the GEM5 architectural simulator was demonstrated, our conclusion was that such a platform was less useful for design space exploration than anticipated due to inherent complexity of the connection between the instruction set architecture level and the microarchitectural level. For effort (4), we found that a code injection approach proved to be more fruitful. The results of our efforts are publicly available in the open source Palacios codebase and published papers, all of which are available from the project web site, v3vee.org. Palacios is currently one of the two codebases (the other being Sandia's Kitten lightweight kernel) that underlies the node operating system for the DOE Hobbes Project, one of two projects tasked with building a systems software prototype for the national exascale computing effort.
Microscale air quality impacts of distributed power generation facilities.
Olaguer, Eduardo P; Knipping, Eladio; Shaw, Stephanie; Ravindran, Satish
2016-08-01
The electric system is experiencing rapid growth in the adoption of a mix of distributed renewable and fossil fuel sources, along with increasing amounts of off-grid generation. New operational regimes may have unforeseen consequences for air quality. A three-dimensional microscale chemical transport model (CTM) driven by an urban wind model was used to assess gaseous air pollutant and particulate matter (PM) impacts within ~10 km of fossil-fueled distributed power generation (DG) facilities during the early afternoon of a typical summer day in Houston, TX. Three types of DG scenarios were considered in the presence of motor vehicle emissions and a realistic urban canopy: (1) a 25-MW natural gas turbine operating at steady state in either simple cycle or combined heating and power (CHP) mode; (2) a 25-MW simple cycle gas turbine undergoing a cold startup with either moderate or enhanced formaldehyde emissions; and (3) a data center generating 10 MW of emergency power with either diesel or natural gas-fired backup generators (BUGs) without pollution controls. Simulations of criteria pollutants (NO2, CO, O3, PM) and the toxic pollutant formaldehyde (HCHO) were conducted assuming a 2-hr operational time period. In all cases, NOx titration dominated ozone production near the source. The turbine scenarios did not result in ambient concentration enhancements significantly exceeding 1 ppbv for gaseous pollutants or 1 µg/m³ for PM after 2 hr of emission, assuming realistic plume rise. In the case of the datacenter with diesel BUGs, ambient NO2 concentrations were enhanced by 10-50 ppbv within 2 km downwind of the source, while maximum PM impacts in the immediate vicinity of the datacenter were less than 5 µg/m³. Plausible scenarios of distributed fossil generation consistent with the electricity grid's transformation to a more flexible and modernized system suggest that a substantial amount of deployment would be required to significantly affect air quality on a localized scale. In particular, natural gas turbines typically used in distributed generation may have minor effects. Large banks of diesel backup generators such as those used by data centers, on the other hand, may require pollution controls or conversion to natural gas-fired reciprocating internal combustion engines to decrease nitrogen dioxide pollution.
Coordinating the Design and Management of Heterogeneous Datacenter Resources
ERIC Educational Resources Information Center
Guevara, Marisabel
2014-01-01
Heterogeneous design presents an opportunity to improve energy efficiency but raises a challenge in management. Whereas prior work separates the two, we coordinate heterogeneous design and management. We present a market-based resource allocation mechanism that navigates the performance and power trade-offs of heterogeneous architectures. Given…
Evaluation of Water consumption and savings achieved in Datacenters through Air side Economization
NASA Astrophysics Data System (ADS)
Mishra, Ravi
Recent research and a few facility owners have focused on eliminating the chiller plant altogether by implementing evaporative cooling as an alternative or augmentation to compressor-based air conditioning, since energy consumption in the chiller plant is dominated by compressor work (around 41%). Because evaporative cooling systems consume water, evaluating their energy savings potential requires considering not just their impact on electricity use but also their impact on water consumption; Joe Kava, Google's head of data center operations, has been quoted as saying that water is the "big elephant in the room" for data center companies. The objective of this study was to calculate the savings achieved in water consumption when these evaporative cooling systems are completely or partially marginalized, i.e., when the facility works strictly in economizer mode (also known as "free cooling"), with other modes of cooling required only for the part of the time when outside temperature, humidity, and pollutant levels are unfavorable, causing improper functioning and reliability issues. The analysis was done on ASHRAE climatic zones with the help of TMY-3 weather data.
The New LASP Interactive Solar IRradiance Datacenter (LISIRD)
NASA Astrophysics Data System (ADS)
Baltzer, T.; Wilson, A.; Lindholm, D. M.; Snow, M. A.; Woodraska, D.; Pankratz, C. K.
2017-12-01
The University of Colorado at Boulder's Laboratory for Atmospheric and Space Physics (LASP) has a long history of providing state-of-the-art solar instrumentation and datasets to the community. In 2005, LASP created a web interface called LISIRD which provided plotting of and access to a number of measured and modeled solar irradiance datasets, and it has been used extensively by members of the community both within and outside of LASP. In August of 2017, LASP is set to release a new version of LISIRD for use by anyone interested in viewing and downloading the datasets it serves. This talk will describe the new LISIRD with emphasis on the features it enables, including: new and more functional plotting interfaces; better dataset browse and search capabilities; more datasets; easier addition of datasets from a wider array of resources; a cleaner interface with better use of screen real estate; and much easier updating of the metadata describing each dataset. Much of this capability is leveraged off new infrastructure that will also be touched upon.
Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola
2014-02-10
We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. The modular structure with distributed control results in a port-count-independent optical switch reconfiguration time. An RF-tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, and network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system could handle an input load up to 0.5, providing a packet loss lower than 10^-5 and an average latency less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to large port counts with limited power penalty.
Ephedrine QoS: An Antidote to Slow, Congested, Bufferless NoCs
Fang, Juan; Yao, Zhicheng; Sui, Xiufeng; Bao, Yungang
2014-01-01
Datacenters consolidate diverse applications to improve utilization. However, when multiple applications are colocated on such platforms, contention for shared resources like networks-on-chip (NoCs) can degrade the performance of latency-critical online services (high-priority applications). Recently proposed bufferless NoCs (Nychis et al.) have the advantages of requiring less area and power, but they pose challenges in quality-of-service (QoS) support, which usually relies on buffer-based virtual channels (VCs). We propose QBLESS, a QoS-aware bufferless NoC scheme for datacenters. QBLESS consists of two components: a routing mechanism (QBLESS-R) that can substantially reduce flit deflection for high-priority applications and a congestion-control mechanism (QBLESS-CC) that guarantees performance for high-priority applications and improves overall system throughput. We use trace-driven simulation to model a 64-core system, finding that, compared to BLESS, a previous state-of-the-art bufferless NoC design, QBLESS improves the performance of high-priority applications by an average of 33.2% and reduces network hops by an average of 42.8%. PMID:25250386
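The routing side of the scheme described above reduces deflections for high-priority traffic by letting it win port arbitration. The sketch below shows a toy priority-then-age arbitration step in that general spirit; it is an illustration of priority-aware deflection arbitration, not the actual QBLESS-R algorithm.

```python
def arbitrate(flits, free_ports):
    """One arbitration round in a bufferless router, deflecting the losers.

    flits: list of dicts with 'priority' (higher wins), 'age', and
    'wanted_port'. Losers are deflected to any remaining free port
    instead of being buffered or dropped; a bufferless router always
    has at least as many output ports as incoming flits.
    """
    assignments = {}
    # higher-priority, then older, flits claim their productive port first
    for i, flit in sorted(enumerate(flits),
                          key=lambda kv: (-kv[1]["priority"], -kv[1]["age"])):
        port = flit["wanted_port"]
        if port in free_ports:
            free_ports.remove(port)
        else:                       # deflection: take any still-free port
            port = free_ports.pop()
        assignments[i] = port
    return assignments

flits = [
    {"priority": 0, "age": 9, "wanted_port": "N"},   # batch traffic
    {"priority": 1, "age": 3, "wanted_port": "N"},   # latency-critical
]
print(arbitrate(flits, {"N", "E", "S", "W"}))  # flit 1 keeps "N", flit 0 deflects
```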
NASA Astrophysics Data System (ADS)
Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo
2017-12-01
Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to provide novel services like virtual networks (VNs). Furthermore, optical networks have been employed to connect the metro-scale geographically distributed DCs. The synergistic virtualization of the DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fiber (SSMF) used is limited by its nonlinear characteristics. Thus, mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) could be employed to increase the capacity of optical networks. However, modal crosstalk (XT) introduced by the optical fibers and components deployed in MDM optical networks impacts the performance of virtual network embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose an XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. The simulation results show that the proposed XT-aware VNE achieves better blocking probability and spectrum utilization than conventional VNE mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, K; Freeman, J; Zavalkovskiy, B
Purpose: Situated 20 miles from 5 major fault lines in California's Bay Area, Stanford University has a critical need for IT infrastructure planning to handle the high probability of devastating earthquakes. Recently, a multi-million dollar project has been underway to overhaul Stanford's radiation oncology information systems, maximizing planning system performance and providing true disaster recovery abilities. An overview of the project will be given with particular focus on lessons learned throughout the build. Methods: In this implementation, two isolated external datacenters provide geographical redundancy to Stanford's main campus datacenter. Real-time mirroring is performed for all data stored on our serial attached network (SAN) storage. In each datacenter, hardware/software virtualization was heavily implemented to maximize server efficiency and provide a robust mechanism to seamlessly migrate users in the event of an earthquake. System performance is routinely assessed through the use of virtualized data robots, able to log in to the system at scheduled times, perform routine planning tasks and report timing results to a performance dashboard. A substantial dose calculation framework (608 CPU cores) has been constructed as part of the implementation. Results: Migration to a virtualized server environment with a high-performance SAN has resulted in up to a 45% speed-up of common treatment planning tasks. Switching to a 608-core DCF has resulted in a 280% speed increase in dose calculations. Server tuning was found to further improve read/write performance by 20%. Disaster recovery tests are carried out quarterly and, although successful, remain time-consuming to perform and verify functionality. Conclusion: Achieving true disaster recovery capabilities is possible through server virtualization and support from skilled IT staff and leadership. Substantial performance improvements are also achievable through careful tuning of server resources and disk read/write operations. Developing a streamlined method to comprehensively test failover is a key requirement for the system's success.
NASA Astrophysics Data System (ADS)
Ware Dewolfe, A.; Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M.; Woods, T. N.
2009-12-01
The Laboratory for Atmospheric and Space Physics (LASP) is enhancing the LASP Interactive Solar IRradiance Datacenter (LISIRD) to provide access to a comprehensive set of solar spectral irradiance measurements. LISIRD has recently been updated to serve many new datasets and models, including sunspot index, photometric sunspot index, Lyman-alpha, and magnesium-II core-to-wing ratio. A new user interface emphasizes web-based interactive visualizations, allowing users to explore and compare this data before downloading it for analysis. The data provided covers a wavelength range from soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as wavelength-independent Total Solar Irradiance (TSI). Combined data from the SORCE, TIMED-SEE, UARS-SOLSTICE, and SME instruments provide almost continuous coverage from 1981 to the present, while Hydrogen Lyman-alpha (121.6 nm) measurements / models date from 1947 to the present. This poster provides an overview of the LISIRD system, summarizes the data sets currently available, describes future plans and capabilities, and provides details on how to access solar irradiance data through LISIRD interfaces at http://lasp.colorado.edu/lisird/.
Solar Irradiance Data Products at the LASP Interactive Solar IRradiance Datacenter (LISIRD)
NASA Astrophysics Data System (ADS)
Ware Dewolfe, A.; Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.
2010-12-01
The Laboratory for Atmospheric and Space Physics (LASP) has developed the LASP Interactive Solar IRradiance Datacenter (LISIRD) to provide access to a comprehensive set of solar irradiance measurements. LISIRD has recently been updated to serve many new datasets and models, including data from SORCE, UARS-SOLSTICE, SME, and TIMED-SEE, and model data from the Flare Irradiance Spectral Model (FISM). The user interface emphasizes web-based interactive visualizations, allowing users to explore and compare this data before downloading it for analysis. The data provided covers a wavelength range from soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as wavelength-independent Total Solar Irradiance (TSI). Combined data from the SORCE, TIMED-SEE, UARS-SOLSTICE, and SME instruments provide continuous coverage from 1981 to the present, while Lyman-alpha measurements, FISM daily data, and TSI models date from the 1940s to the present. LISIRD will also host Glory TSI data as part of the SORCE data system. This poster provides an overview of the LISIRD system, summarizes the data sets currently available, describes future plans and capabilities, and provides details on how to access solar irradiance data through LISIRD’s interfaces.
Local storage federation through XRootD architecture for interactive distributed analysis
NASA Astrophysics Data System (ADS)
Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.
2015-12-01
A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites with the aim of creating a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data-intensive analyses, is the main challenge of these facilities. One of the crucial aspects of user-driven analysis execution is data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome such a limitation, a federated infrastructure, which provides full access to all the data belonging to the federation independently from the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies in order to provide a dynamic, easy-to-use and well-performing solution for data handling. It should allow the users to store files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.
ArrayBridge: Interweaving declarative array processing with high-performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros
Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
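For readers unfamiliar with the imperative side of the pairing described above, the h5py sketch below shows the kind of plain HDF5 reads and writes that ArrayBridge is meant to keep interoperable with declarative array queries; the file and dataset names are hypothetical and this is not ArrayBridge's own API.

```python
import numpy as np
import h5py

# write a small 2-D array the way an HPC kernel might (names are hypothetical)
with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("temperature", data=np.random.rand(1024, 1024))

# read back only a slab; this kind of subsetting is what a declarative
# array query would push down instead of loading the whole file
with h5py.File("simulation.h5", "r") as f:
    slab = f["temperature"][100:200, 0:512]  # reads just this region
    print(slab.shape, slab.mean())
```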
NASA Astrophysics Data System (ADS)
Dabos, G.; Pitris, S.; Mitsolidou, C.; Alexoudi, T.; Fitsios, D.; Cherchi, M.; Harjanne, M.; Aalto, T.; Kanellos, G. T.; Pleros, N.
2017-02-01
As data centers constantly expand, electronic switches are facing the challenge of enhanced scalability and the request for increased pin count and bandwidth. Photonic technology and wavelength division multiplexing have always been a strong alternative for efficient routing, and their potential was already proven in the telecoms. CWDM transceivers have emerged in board-to-board level interconnection, revealing the potential for wavelength routing to be applied in the datacom, and an AWGR-based approach has recently been proposed towards building an optical multi-socket interconnection to offer any-to-any connectivity with high aggregated throughput and reduced power consumption. Echelle gratings have long been recognized as the multiplexing block exhibiting the smallest footprint and robustness in a wide number of applications compared to other alternatives such as the Arrayed Waveguide Grating. Such filtering devices can also perform in a similar way to cyclical AWGRs and serve as mid-board routing platforms in multi-socket environments. In this communication, we present such a 3x3 Echelle grating integrated on a thick SOI platform with aluminum-coated facets, which is shown to perform successful wavelength-routing functionality at 10 Gb/s. The device exhibits a footprint of 60×270 μm², while the static characterization showed a 3 dB on-chip loss for the best channel. The 3 dB bandwidth of the channels was 4.5 nm and the free spectral range was 90 nm. The echelle was evaluated in a 2x2 wavelength routing topology, exhibiting a power penalty of below 0.4 dB at 10^-9 BER for the C-band. Further experimental evaluations of the platform involve commercially available CWDM datacenter transceivers, towards emulating the traffic scenario of an optically interconnected multi-socket environment.
FastLane: Agile Drop Notification for Datacenter Networks
2013-10-23
interactivity deadlines mean that networks are increasingly evaluated on high percentile completion times of these short flows. Achieving consistent flow...triggering retransmissions based on out-of-order delivery. We evaluate FastLane in a number of scenarios, using both testbed experiments and simulations...Results from our evaluation demonstrate that FastLane improves the 99.9th percentile completion time of short flows by up to 75% compared to TCP
Solar Irradiance Data Products at the LASP Interactive Solar IRradiance Datacenter (LISIRD)
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Ware DeWolfe, A.; Wilson, A.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.
2011-12-01
The Laboratory for Atmospheric and Space Physics (LASP) has developed the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/) web site to provide access to a comprehensive set of solar irradiance measurements and related datasets. Current data holdings include products from NASA missions SORCE, UARS, SME, and TIMED-SEE. The data provided covers a wavelength range from soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as Total Solar Irradiance (TSI). Other datasets include solar indices, spectral and flare models, solar images, and more. The LISIRD web site features updated plotting, browsing, and download capabilities enabled by dygraphs, JavaScript, and Ajax calls to the LASP Time Series Server (LaTiS). In addition to the web browser interface, most of the LISIRD datasets can be accessed via the LaTiS web service interface that supports the OPeNDAP standard. OPeNDAP clients and other programming APIs are available for making requests that subset, aggregate, or filter data on the server before it is transported to the user. This poster provides an overview of the LISIRD system, summarizes the datasets currently available, and provides details on how to access solar irradiance data products through LISIRD's interfaces.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
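The abstract above describes a fitness function combining makespan with the degree of imbalance across VMs. The sketch below evaluates a task-to-VM assignment in that spirit; the weighting scheme and the exact definition of imbalance are illustrative assumptions rather than the formula from the paper.

```python
def fitness(schedule, task_lengths, vm_speeds, alpha=0.7):
    """Score a task-to-VM assignment (lower is better).

    schedule[i] is the VM index assigned to task i. Makespan is the
    finish time of the busiest VM; imbalance is the spread of VM
    completion times relative to their mean. alpha is an assumed
    weighting between the two objectives.
    """
    completion = [0.0] * len(vm_speeds)
    for task, vm in enumerate(schedule):
        completion[vm] += task_lengths[task] / vm_speeds[vm]
    makespan = max(completion)
    mean = sum(completion) / len(completion)
    imbalance = (max(completion) - min(completion)) / mean if mean else 0.0
    return alpha * makespan + (1 - alpha) * imbalance

# toy instance: 5 tasks (lengths in MI) on 2 VMs of different speeds (MIPS)
print(fitness([0, 1, 0, 1, 1], [400, 200, 300, 100, 500], [1.0, 2.0]))
```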
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
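As a quick check on the headline figures quoted above, 14 Gflops at 185 watts works out to roughly 76 Mflops per watt; the one-liner below spells out that arithmetic.

```python
# performance-per-watt arithmetic for the figures quoted in the abstract
linpack_gflops = 14.0
power_watts = 185.0

flops_per_watt = linpack_gflops * 1e9 / power_watts
print(f"{flops_per_watt / 1e6:.1f} Mflops/W")  # ~75.7 Mflops/W
```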
2011-01-01
we propose that hot-spot mitigation using thermoelectric coolers can be used as a power management mechanism to allow global coolers to be provisioned for a better worst-case temperature, leading to substantial savings in cooling power. In order to quantify the potential power savings from using...energy density inside a processor to maximally tolerable levels, modern microprocessors make extensive use of hardware structures such as the load
Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)
NASA Astrophysics Data System (ADS)
Hancher, M.
2013-12-01
Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
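For a sense of the just-in-time model described above, the sketch below uses the Earth Engine Python client to compose a Landsat composite lazily and pull back a single reduced value; the collection ID, filters, and location are illustrative choices, and running it requires an authenticated Earth Engine account.

```python
import ee

ee.Initialize()  # requires a prior ee.Authenticate() / Earth Engine account

# Compose a cloud-light yearly Landsat composite lazily; nothing runs yet.
# The collection ID, cloud threshold, and dates are illustrative choices.
collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
              .filterDate("2013-01-01", "2013-12-31")
              .filter(ee.Filter.lt("CLOUD_COVER", 20)))
composite = collection.median()

# Computation happens in Google's datacenters only when a result is pulled.
point = ee.Geometry.Point(-122.45, 37.77)  # roughly San Francisco
print(composite.reduceRegion(ee.Reducer.mean(), point, 30).getInfo())
```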
Towards energy-efficient photonic interconnects
NASA Astrophysics Data System (ADS)
Demir, Yigit; Hardavellas, Nikos
2015-03-01
Silicon photonics have emerged as a promising solution to meet the growing demand for high-bandwidth, low-latency, and energy-efficient on-chip and off-chip communication in many-core processors. However, current silicon-photonic interconnect designs for many-core processors waste a significant amount of power because (a) lasers are always on, even during periods of interconnect inactivity, and (b) microring resonators employ heaters which consume a significant amount of power just to overcome thermal variations and maintain communication on the photonic links, especially in a 3D-stacked design. The problem of high laser power consumption is particularly important as lasers typically have very low energy efficiency, and photonic interconnects often remain underutilized both in scientific computing (compute-intensive execution phases underutilize the interconnect), and in server computing (servers in Google-scale datacenters have a typical utilization of less than 30%). We address the high laser power consumption by proposing EcoLaser+, which is a laser control scheme that saves energy by predicting the interconnect activity and opportunistically turning the on-chip laser off when possible, and also by scaling the width of the communication link based on a runtime prediction of the expected message length. Our laser control scheme can save up to 62 - 92% of the laser energy, and improve the energy efficiency of a manycore processor with negligible performance penalty. We address the high trimming (heating) power consumption of the microrings by proposing insulation methods that reduce the impact of localized heating induced by highly-active components on the 3D-stacked logic die.
Round, Jennifer E; Campbell, A Malcolm
2013-01-01
The ability to interpret experimental data is essential to understanding and participating in the process of scientific discovery. Reading primary research articles can be a frustrating experience for undergraduate biology students because they have very little experience interpreting data. To enhance their data interpretation skills, students used a template called "Figure Facts" to assist them with primary literature-based reading assignments in an advanced cellular neuroscience course. The Figure Facts template encourages students to adopt a data-centric approach, rather than a text-based approach, to understand research articles. Specifically, Figure Facts requires students to focus on the experimental data presented in each figure and identify specific conclusions that may be drawn from those results. Students who used Figure Facts for one semester increased the amount of time they spent examining figures in a primary research article, and regular exposure to primary literature was associated with improved student performance on a data interpretation skills test. Students reported decreased frustration associated with interpreting data figures, and their opinions of the Figure Facts template were overwhelmingly positive. In this paper, we present Figure Facts for others to adopt and adapt, with reflection on its implementation and effectiveness in improving undergraduate science education.
Importance-satisfaction analysis for primary care physicians' perspective on EHRs in Taiwan.
Ho, Cheng-Hsun; Wen, Hsyien-Chia; Chu, Chi-Ming; Wu, Yi-Syuan; Wang, Jen-Leng
2014-06-06
The Taiwan government has been promoting Electronic Health Records (EHRs) to primary care physicians. How to extend the EHRs adoption rate by measuring physicians' perspective of the importance and performance of EHRs has become one of the critical issues for healthcare organizations. We conducted a comprehensive survey in 2010 in which a total of 1034 questionnaires were distributed to primary care physicians. The project was sponsored by the Department of Health to accelerate the adoption of EHRs. 556 valid responses were analyzed, resulting in a valid response rate of 53.77%. The data were analyzed based on a data-centered analytical framework (5-point Likert scale). The mean importance and satisfaction scores of the four dimensions were 4.16, 3.44 (installation and maintenance), 4.12, 3.51 (product effectiveness), 4.10, 3.31 (system function) and 4.34, 3.70 (customer service) respectively. This study provided a direction to government by focusing on attributes which physicians found important but were dissatisfied with, to close the gap between actual and expected performance of the EHRs. The authorities should emphasize the potential advantages in meaningful use and provide training programs, conferences, technical assistance and incentives to enhance the national level implementation of EHRs for primary physicians.
Demonstration of fully enabled data center subsystem with embedded optical interconnect
NASA Astrophysics Data System (ADS)
Pitwon, Richard; Worrall, Alex; Stevens, Paul; Miller, Allen; Wang, Kai; Schmidtke, Katharine
2014-03-01
The evolution of data storage communication protocols and corresponding in-system bandwidth densities is set to impose prohibitive cost and performance constraints on future data storage system designs, fuelling proposals for hybrid electronic and optical architectures in data centers. The migration of optical interconnect into the system enclosure itself can substantially mitigate the communications bottlenecks resulting from both the increase in data rate and internal interconnect link lengths. In order to assess the viability of embedding optical links within prevailing data storage architectures, we present the design and assembly of a fully operational data storage array platform, in which all internal high speed links have been implemented optically. This required the deployment of mid-board optical transceivers, an electro-optical midplane and proprietary pluggable optical connectors for storage devices. We present the design of a high density optical layout to accommodate the midplane interconnect requirements of a data storage enclosure with support for 24 Small Form Factor (SFF) solid state or rotating disk drives and the design of a proprietary optical connector and interface cards, enabling standard drives to be plugged into an electro-optical midplane. Crucially, we have also modified the platform to accommodate longer optical interconnect lengths up to 50 meters in order to investigate future datacenter architectures based on disaggregation of modular subsystems. The optically enabled data storage system has been fully validated for both 6 Gb/s and 12 Gb/s SAS data traffic conveyed along internal optical links.
Giovanni in the Cloud: Earth Science Data Exploration in Amazon Web Services
NASA Astrophysics Data System (ADS)
Hegde, M.; Petrenko, M.; Smit, C.; Zhang, H.; Pilone, P.; Zasorin, A. A.; Pham, L.
2017-12-01
Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a popular online data exploration tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), providing 22 analysis and visualization services for over 1600 Earth Science data variables. Owing to its popularity, Giovanni has experienced a consistent growth in overall demand, with periodic usage spikes attributed to trainings by education organizations, extensive data analysis in response to natural disasters, preparations for science meetings, etc. Furthermore, the new generation of spaceborne sensors and high resolution models have resulted in an exponential growth in data volume with data distributed across the traditional boundaries of datacenters. Seamless exploration of data (without users having to worry about data center boundaries) has been a key recommendation of the GES DISC User Working Group. These factors have required new strategies for delivering acceptable performance. The cloud-based Giovanni, built on Amazon Web Services (AWS), evaluates (1) AWS native solutions to provide a scalable, serverless architecture; (2) open standards for data storage in the Cloud; (3) a cost model for operations; and (4) end-user performance. Our preliminary findings indicate that the use of serverless architecture has a potential to significantly reduce development and operational cost of Giovanni. The combination of using AWS managed services, storage of data in open standards, and schema-on-read data access strategy simplifies data access and analytics, in addition to making data more accessible to the end users of Giovanni through popular programming languages.
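To illustrate the serverless pattern the abstract alludes to, the sketch below is a minimal AWS Lambda-style handler that pulls a small gridded-data object from S3 and returns an area-averaged statistic. The bucket, key, and data layout are hypothetical stand-ins; the real service's storage schema and analysis services are not described here.

```python
# Sketch of a serverless analysis endpoint in the spirit of the cloud Giovanni
# prototype: a Lambda handler that fetches a 2-D array from S3 and returns its
# area mean. Bucket, key, and file format are hypothetical assumptions.
import io
import json

import boto3
import numpy as np

s3 = boto3.client('s3')

def handler(event, context):
    bucket = event.get('bucket', 'example-giovanni-cache')     # hypothetical bucket
    key = event.get('key', 'AOD/2017/07/aod_2017_07.npy')       # hypothetical key

    obj = s3.get_object(Bucket=bucket, Key=key)
    grid = np.load(io.BytesIO(obj['Body'].read()))              # 2-D lat/lon grid

    return {
        'statusCode': 200,
        'body': json.dumps({'area_mean': float(np.nanmean(grid))}),
    }
```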
Problems of the active tectonics of the Eastern Black Sea
NASA Astrophysics Data System (ADS)
Javakhishvili, Z.; Godoladze, T.; Dreger, D. S.; Mikava, D.; Tvaliashvili, A.
2016-12-01
The Black Sea Basin is part of the Arabian-Eurasian collision zone and an important unit for understanding the tectonic processes of the region. This complex basin comprises two deep basins, separated by the mid-Black Sea Ridge. The basement of the Black Sea includes areas with oceanic and continental crust. It was formed as a "back-arc" basin over the subduction zone during the closing of the Tethys Ocean. In the past decades the Black Sea has been the subject of intense geological and geophysical studies. Several papers were published about the geological history, tectonics, basement relief and crustal and upper mantle structure of the basin, and new tectonic schemes were suggested (e.g. Nikishin et al. 2014, Shillington et al. 2008, Starostenko et al. 2004). Nevertheless, the seismicity of the Black Sea is poorly studied due to the lack of a seismic network in the coastal area. It is considered that the eastern basin currently lies in a compressional setting associated with the uplift of the Caucasus, and that the structural development of the Caucasus was closely related to the evolution of the Eastern Black Sea Basin. Analyses of the recent 2012 earthquake sequence can provide useful information for understanding the complex tectonic structure of the Eastern Black Sea region. Right after the earthquake of 2012/12/23, the National Seismic Monitoring Center of Georgia deployed 4 additional stations in the coastal area of the country, close to the epicentral area, to monitor the aftershock sequence. Seismic activity in the epicentral area has continued to the present. We have relocated approximately 1200 aftershocks to delineate the fault scarp, using data from Georgian, Turkish and Russian datacenters. Waveforms of the major events and the aftershocks were inverted for fault plane solutions; Green's functions computed using a new 1D velocity model of the region were used for the inversion. The strike-slip mechanisms of the major events of the sequence indicate extensional features in the Eastern Black Sea Region as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckman, Todd J.; Hertzel, Ali K.; Lane, James J.
In 2013, the U.S. Department of Energy's (DOE) Hanford Site, located in Washington State, funded an update to the critical network infrastructure supporting the Hanford Federal Cloud (HFC). The project, called ET-50, was the final step in a plan that was initiated five years ago called "Hanford's IT Vision, 2015 and Beyond." The ET-50 project upgraded Hanford's core data center switches and routers along with a majority of the distribution layer switches. The upgrades allowed HFC the network intelligence to provide Hanford with a more reliable and resilient network architecture. The culmination of the five year plan improved network intelligence and high performance computing as well as helped to provide 10 Gbps capable links between core backbone devices (10 times the previous bandwidth). These improvements allow Hanford the ability to further support bandwidth intense applications, such as video teleconferencing. The ET-50 switch upgrade, along with other upgrades implemented from the five year plan, have prepared Hanford's network for the next evolution of technology in voice, video, and data. Hand-in-hand with ET-50's major data center outage, Mission Support Alliance's (MSA) Information Management (IM) organization executed a disaster recovery (DR) exercise to perform a true integration test and capability study. The DR scope was planned within the constraints of ET-50's 14 hour datacenter outage window. This DR exercise tested Hanford's Continuity of Operations (COOP) capability and failover plans for safety and business critical Hanford Federal Cloud applications. The planned suite of services to be tested was identified prior to the outage and plans were prepared to test the services' ability to fail over from the primary Hanford data center to the backup data center. The services tested were: Core Network (backbone, firewall, load balancers); Voicemail; Voice over IP (VoIP); Emergency Notification; Virtual desktops; and, Select set of production applications and data. The primary objective of the exercise was to test COOP around the emergency operations at Hanford to provide information on capabilities and dependencies of the current system to ensure improved focus of emergency, safety and security capacity in a disaster situation. The integration of the DR test into the ET-50 project allowed the testing of COOP at Hanford and allowed the lessons learned to be defined. These lessons learned have helped improve the understanding of Hanford's COOP capabilities and will be critical for future planning. With the completion of the Hanford Federal Cloud network upgrades and the disaster recovery exercise, the MSA has a clearer path forward for future technology implementations as well as network improvements to help shape the usability and reliability of the Hanford network in support of the cleanup mission.
The adoption of IT security standards in a healthcare environment.
Gomes, Rui; Lapão, Luís Velez
2008-01-01
Security is a vital part of daily operations for hospitals, which need to ensure that information is adequately secured. In Portugal, more CIOs are seeking assurance that their hospital IS departments are properly protecting information assets from security threats. It is imperative to take the necessary measures to ensure risk management and business continuity. Security management certification provides just such a guarantee, increasing patient and partner confidence. This paper introduces one best practice for implementing four security controls in a hospital datacenter infrastructure (ISO27002), and describes the security assessment for implementing such controls.
Multi terabits/s optical access transport technologies
NASA Astrophysics Data System (ADS)
Binh, Le Nguyen; Wang Tao, Thomas; Livshits, Daniil; Gubenko, Alexey; Karinou, Fotini; Liu Ning, Gordon; Shkolnik, Alexey
2016-02-01
Tremendous efforts have been devoted to multi-Tbps transmission over ultra-long-distance, metro and access optical networks. With the exponentially increasing demand for data transmission, storage and serving, especially in 5G wireless access scenarios, optical Internet networking has evolved toward data-center-based optical networks, putting pressure on novel and economical access transmission systems. This paper reports (1) experimental platforms and transmission techniques employing band-limited optical components operating at 10G for 100G based at 28G baud. Advanced modulation formats such as PAM-4, DMT, duobinary, etc., are reported and their advantages and disadvantages are analyzed so as to achieve multi-Tbps optical transmission systems for access inter- and intra-data-center networks; (2) integrated multi-Tbps sources combining comb lasers and micro-ring modulators meeting the required performance for access systems are reported. Ten-sub-carrier quantum dot comb lasers are employed in association with wideband optical intensity modulators to demonstrate the feasibility of such sources, with integrated micro-ring modulators acting as a combined function of demultiplexing/multiplexing and modulation, hence compactness and economy of scale. Using multi-level modulation and direct detection at 56 GBd, an aggregate of higher than 2 Tbps and even 3 Tbps can be achieved by interleaving two comb lasers of 16 sub-carrier lines; (3) finally, the fundamental designs of ultra-compact flexible filters and switching integrated components based on Si photonics for multi-Tbps active interconnection are presented. Experimental results on multi-channel transmissions and the performance of optical switching matrices, and their effects on data channels, are presented.
SDTCP: Towards Datacenter TCP Congestion Control with SDN for IoT Applications.
Lu, Yifei; Ling, Zhen; Zhu, Shuhong; Tang, Ling
2017-01-08
The Internet of Things (IoT) has gained popularity in recent years. Today's IoT applications are now increasingly deployed in cloud platforms to perform Big Data analytics. In cloud data center networks (DCN), TCP incast usually happens when multiple senders simultaneously communicate with a single receiver. However, when TCP incast happens, DCN may suffer from both throughput collapse for TCP burst flows and temporary starvation for TCP background flows. In this paper, we propose a software defined network (SDN)-based TCP congestion control mechanism, referred to as SDTCP, to leverage the features, e.g., centralized control methods and the global view of the network, in order to solve the TCP incast problems. When we detect network congestion on an OpenFlow switch, our controller can select the background flows and reduce their bandwidth by adjusting the advertised window of TCP ACK packets of the corresponding background flows so as to reserve more bandwidth for burst flows. SDTCP is transparent to the end systems and can accurately decelerate the rate of background flows by leveraging the global view of the network gained via SDN. The experiments demonstrate that our SDTCP can provide high tolerance for burst flows and achieve better flow completion time for short flows. Therefore, SDTCP is an effective and scalable solution for the TCP incast problem.
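As an illustration of the control decision described above (not the authors' actual controller code), the following sketch classifies flows at a congested switch port into background and burst flows and computes a reduced advertised window for the background flows. The thresholds, flow-statistics format, and MSS value are assumptions made for this sketch.

```python
# Illustrative SDTCP-style decision logic: when a switch port is congested,
# throttle long-lived background flows by shrinking the TCP advertised window
# carried in their ACKs, leaving headroom for short burst flows.
# Thresholds, data structures, and the MSS value are assumptions.

MSS = 1460                                      # bytes, assumed maximum segment size
QUEUE_CONGESTION_THRESHOLD = 0.8                # fraction of queue capacity
BACKGROUND_BYTES_THRESHOLD = 10 * 1024 * 1024   # flows larger than 10 MB

def select_background_flows(flow_stats):
    """flow_stats: list of dicts with 'flow_id' and cumulative 'bytes'."""
    return [f for f in flow_stats if f['bytes'] >= BACKGROUND_BYTES_THRESHOLD]

def throttled_window(queue_occupancy, current_window):
    """Shrink the advertised window in proportion to congestion severity."""
    if queue_occupancy < QUEUE_CONGESTION_THRESHOLD:
        return current_window
    severity = (queue_occupancy - QUEUE_CONGESTION_THRESHOLD) / (1 - QUEUE_CONGESTION_THRESHOLD)
    new_window = int(current_window * (1 - 0.9 * severity))
    return max(new_window, MSS)                 # never advertise less than one segment

if __name__ == '__main__':
    flows = [{'flow_id': 1, 'bytes': 64 * 1024},           # short burst flow
             {'flow_id': 2, 'bytes': 200 * 1024 * 1024}]   # background flow
    for f in select_background_flows(flows):
        # In the real system the controller would rewrite the window field of
        # this flow's ACK packets at the switch and update the TCP checksum.
        print(f['flow_id'], throttled_window(0.95, current_window=65535))
    # -> flow 2 is throttled; flow 1 keeps its full advertised window.
```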
NASA Astrophysics Data System (ADS)
Smith, R.; Kasprzyk, J. R.; Zagona, E. A.
2013-12-01
Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.
Round, Jennifer E.; Campbell, A. Malcolm
2013-01-01
The ability to interpret experimental data is essential to understanding and participating in the process of scientific discovery. Reading primary research articles can be a frustrating experience for undergraduate biology students because they have very little experience interpreting data. To enhance their data interpretation skills, students used a template called “Figure Facts” to assist them with primary literature–based reading assignments in an advanced cellular neuroscience course. The Figure Facts template encourages students to adopt a data-centric approach, rather than a text-based approach, to understand research articles. Specifically, Figure Facts requires students to focus on the experimental data presented in each figure and identify specific conclusions that may be drawn from those results. Students who used Figure Facts for one semester increased the amount of time they spent examining figures in a primary research article, and regular exposure to primary literature was associated with improved student performance on a data interpretation skills test. Students reported decreased frustration associated with interpreting data figures, and their opinions of the Figure Facts template were overwhelmingly positive. In this paper, we present Figure Facts for others to adopt and adapt, with reflection on its implementation and effectiveness in improving undergraduate science education. PMID:23463227
Importance-Satisfaction Analysis for Primary Care Physicians’ Perspective on EHRs in Taiwan †
Ho, Cheng-Hsun; Wen, Hsyien-Chia; Chu, Chi-Ming; Wu, Yi-Syuan; Wang, Jen-Leng
2014-01-01
The Taiwan government has been promoting Electronic Health Records (EHRs) to primary care physicians. How to extend the EHRs adoption rate by measuring physicians’ perspective of the importance and performance of EHRs has become one of the critical issues for healthcare organizations. We conducted a comprehensive survey in 2010 in which a total of 1034 questionnaires were distributed to primary care physicians. The project was sponsored by the Department of Health to accelerate the adoption of EHRs. 556 valid responses were analyzed, resulting in a valid response rate of 53.77%. The data were analyzed based on a data-centered analytical framework (5-point Likert scale). The mean importance and satisfaction scores of the four dimensions were 4.16, 3.44 (installation and maintenance), 4.12, 3.51 (product effectiveness), 4.10, 3.31 (system function) and 4.34, 3.70 (customer service) respectively. This study provided a direction to government by focusing on attributes which physicians found important but were dissatisfied with, to close the gap between actual and expected performance of the EHRs. The authorities should emphasize the potential advantages in meaningful use and provide training programs, conferences, technical assistance and incentives to enhance the national level implementation of EHRs for primary physicians. PMID:24914640
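The importance-satisfaction gap that drives the recommendations above can be reproduced directly from the reported means. The short sketch below ranks the four dimensions by their gap (importance minus satisfaction), using only the numbers given in the abstract.

```python
# Importance-satisfaction gap analysis using the mean scores reported above
# (5-point Likert scale). Larger gaps mark attributes physicians rate as
# important but are less satisfied with.

dimensions = {
    'installation and maintenance': (4.16, 3.44),
    'product effectiveness':        (4.12, 3.51),
    'system function':              (4.10, 3.31),
    'customer service':             (4.34, 3.70),
}

gaps = {name: importance - satisfaction
        for name, (importance, satisfaction) in dimensions.items()}

for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:30s} gap = {gap:.2f}")
# 'system function' shows the largest gap (0.79), matching the paper's
# suggestion to prioritize attributes that are important but unsatisfying.
```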
Toward Precision Healthcare: Context and Mathematical Challenges
Colijn, Caroline; Jones, Nick; Johnston, Iain G.; Yaliraki, Sophia; Barahona, Mauricio
2017-01-01
Precision medicine refers to the idea of delivering the right treatment to the right patient at the right time, usually with a focus on a data-centered approach to this task. In this perspective piece, we use the term “precision healthcare” to describe the development of precision approaches that bridge from the individual to the population, taking advantage of individual-level data, but also taking the social context into account. These problems give rise to a broad spectrum of technical, scientific, policy, ethical and social challenges, and new mathematical techniques will be required to meet them. To ensure that the science underpinning “precision” is robust, interpretable and well-suited to meet the policy, ethical and social questions that such approaches raise, the mathematical methods for data analysis should be transparent, robust, and able to adapt to errors and uncertainties. In particular, precision methodologies should capture the complexity of data, yet produce tractable descriptions at the relevant resolution while preserving intelligibility and traceability, so that they can be used by practitioners to aid decision-making. Through several case studies in this domain of precision healthcare, we argue that this vision requires the development of new mathematical frameworks, both in modeling and in data analysis and interpretation. PMID:28377724
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Justin; Kennedy, M. J.; Selvidge, Jennifer
High performance III-V lasers at datacom and telecom wavelengths on on-axis (001) Si are needed for scalable datacenter interconnect technologies. We demonstrate electrically injected quantum dot lasers grown on on-axis (001) Si patterned with {111} v-grooves lying in the [110] direction. No additional Ge buffers or substrate miscut was used. The active region consists of five InAs/InGaAs dot-in-a-well layers. Here, we achieve continuous wave lasing with thresholds as low as 36 mA and operation up to 80°C.
Norman, Justin; Kennedy, M. J.; Selvidge, Jennifer; ...
2017-02-14
High performance III-V lasers at datacom and telecom wavelengths on on-axis (001) Si are needed for scalable datacenter interconnect technologies. We demonstrate electrically injected quantum dot lasers grown on on-axis (001) Si patterned with {111} v-grooves lying in the [110] direction. No additional Ge buffers or substrate miscut was used. The active region consists of five InAs/InGaAs dot-in-a-well layers. Here, we achieve continuous wave lasing with thresholds as low as 36 mA and operation up to 80°C.
A three phase optimization method for precopy based VM live migration.
Sharma, Sangeeta; Chawla, Meenu
2016-01-01
Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators to manage datacenters efficiently. It reduces service interruption by transferring the virtual machine without stopping it at the source. The transfer of a large number of virtual machine memory pages results in long migration time as well as downtime, which also affects overall system performance. This situation becomes unbearable when migration takes place over a slower network or over a long distance within a cloud. In this paper, the precopy based virtual machine live migration method is thoroughly analyzed to trace the issues responsible for its performance drops. In order to address these issues, this paper proposes the three phase optimization (TPO) method. It works in three phases as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple RLE compression technique. As a result, each phase significantly reduces total pages transferred, total migration time and downtime, respectively. The proposed TPO method is evaluated using different representative workloads in a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71%, total migration time by 70%, and downtime by 3% for a higher workload, and it does not impose significant overhead compared to the traditional precopy method. A comparison of the TPO method with other methods is also presented to demonstrate its effectiveness. The TPO and precopy methods are also tested at different numbers of iterations; the TPO method gives better performance even with fewer iterations.
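Phase three of TPO compresses the pages sent in the final stop-and-copy iteration with run-length encoding. The sketch below shows a minimal byte-oriented RLE encoder/decoder of the kind the abstract refers to; the exact encoding used in the paper is not specified, so this is only an illustration.

```python
# Minimal run-length encoding (RLE) sketch for the final migration iteration:
# memory pages often contain long runs of identical bytes (e.g. zero pages),
# which RLE shrinks cheaply before they are pushed over the network.

def rle_encode(page: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(page):
        run_byte = page[i]
        run_len = 1
        while i + run_len < len(page) and page[i + run_len] == run_byte and run_len < 255:
            run_len += 1
        out += bytes((run_len, run_byte))   # (count, value) pairs
        i += run_len
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        count, value = data[i], data[i + 1]
        out += bytes([value]) * count
    return bytes(out)

if __name__ == '__main__':
    page = bytes(4096)                      # a 4 KiB zero page
    encoded = rle_encode(page)
    assert rle_decode(encoded) == page
    print(len(page), '->', len(encoded), 'bytes')   # 4096 -> 34 bytes
```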
SDTCP: Towards Datacenter TCP Congestion Control with SDN for IoT Applications
Lu, Yifei; Ling, Zhen; Zhu, Shuhong; Tang, Ling
2017-01-01
The Internet of Things (IoT) has gained popularity in recent years. Today’s IoT applications are now increasingly deployed in cloud platforms to perform Big Data analytics. In cloud data center networks (DCN), TCP incast usually happens when multiple senders simultaneously communicate with a single receiver. However, when TCP incast happens, DCN may suffer from both throughput collapse for TCP burst flows and temporary starvation for TCP background flows. In this paper, we propose a software defined network (SDN)-based TCP congestion control mechanism, referred to as SDTCP, to leverage the features, e.g., centralized control methods and the global view of the network, in order to solve the TCP incast problems. When we detect network congestion on an OpenFlow switch, our controller can select the background flows and reduce their bandwidth by adjusting the advertised window of TCP ACK packets of the corresponding background flows so as to reserve more bandwidth for burst flows. SDTCP is transparent to the end systems and can accurately decelerate the rate of background flows by leveraging the global view of the network gained via SDN. The experiments demonstrate that our SDTCP can provide high tolerance for burst flows and achieve better flow completion time for short flows. Therefore, SDTCP is an effective and scalable solution for the TCP incast problem. PMID:28075347
Evaluating architecture impact on system energy efficiency
Yu, Shijie; Wang, Rui; Luan, Zhongzhi; Qian, Depei
2017-01-01
As energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from an energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering comprehensive evaluation of energy efficiency is the deficient power measuring approach. Most energy studies rely on either external power meters or power models, and both of these methods contain intrinsic drawbacks in their practical adoption and measuring accuracy. Fortunately, the advent of Intel Running Average Power Limit (RAPL) interfaces has taken power measurement ability to the next level, with higher accuracy and finer time resolution. Therefore, we argue it is the right time to conduct an in-depth evaluation of the existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites including serial and parallel workloads from diverse domains to evaluate architecture features such as Non Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT) and Turbo Boost. The energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement ability exposed by RAPL. The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory node caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most of the workloads benefit from a notable increase in energy efficiency using SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective at accelerating workload execution and further saving energy, however it may not be applicable on systems with a tight power budget. PMID:29161317
Evaluating architecture impact on system energy efficiency.
Yu, Shijie; Yang, Hailong; Wang, Rui; Luan, Zhongzhi; Qian, Depei
2017-01-01
As energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from an energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering comprehensive evaluation of energy efficiency is the deficient power measuring approach. Most energy studies rely on either external power meters or power models, and both of these methods contain intrinsic drawbacks in their practical adoption and measuring accuracy. Fortunately, the advent of Intel Running Average Power Limit (RAPL) interfaces has taken power measurement ability to the next level, with higher accuracy and finer time resolution. Therefore, we argue it is the right time to conduct an in-depth evaluation of the existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites including serial and parallel workloads from diverse domains to evaluate architecture features such as Non Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT) and Turbo Boost. The energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement ability exposed by RAPL. The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory node caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most of the workloads benefit from a notable increase in energy efficiency using SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective at accelerating workload execution and further saving energy, however it may not be applicable on systems with a tight power budget.
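The subcomponent energy tracking described above relies on RAPL counters. On Linux these are commonly exposed through the powercap sysfs interface; the sketch below samples the package-domain counter to estimate average power over an interval. The sysfs path, domain layout, and permissions vary by machine, so treat them as assumptions.

```python
# Sketch: estimate average package power from the Intel RAPL counters exposed
# by the Linux powercap interface. Paths and domain layout differ across
# systems (and may require elevated privileges), so this is an illustration.
import time

RAPL_DIR = '/sys/class/powercap/intel-rapl:0'   # package 0 domain (assumed path)

def read_energy_uj() -> int:
    with open(f'{RAPL_DIR}/energy_uj') as f:
        return int(f.read())

def read_max_energy_uj() -> int:
    with open(f'{RAPL_DIR}/max_energy_range_uj') as f:
        return int(f.read())

def average_power_watts(interval_s: float = 1.0) -> float:
    start = read_energy_uj()
    time.sleep(interval_s)
    end = read_energy_uj()
    delta = end - start
    if delta < 0:                       # counter wrapped around
        delta += read_max_energy_uj()
    return delta / 1e6 / interval_s     # microjoules -> joules -> watts

if __name__ == '__main__':
    print(f"package power ~ {average_power_watts():.1f} W")
```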
Namnabat, Soha; Kim, Kyung-Jo; Jones, Adam; Himmelhuber, Roland; DeRose, Christopher T; Trotter, Douglas C; Starbuck, Andrew L; Pomerene, Andrew; Lentine, Anthony L; Norwood, Robert A
2017-09-04
Silicon photonics has gained interest for its potential to provide higher efficiency, bandwidth and reduced power consumption compared to electrical interconnects in datacenters and high performance computing environments. However, it is well known that silicon photonic devices suffer from temperature fluctuations due to silicon's high thermo-optic coefficient, and therefore temperature control is required in many applications. Here we present an athermal optical add-drop multiplexer fabricated from ring resonators. We used a sol-gel inorganic-organic hybrid material as an alternative to previously used materials such as polymers and titanium dioxide. In this work we studied the thermal curing parameters of the sol-gel and their effect on the thermal wavelength shift of the rings. With this method, we were able to demonstrate a thermal shift down to -6.8 pm/°C for transverse electric (TE) polarization in ring resonators with waveguide widths of 325 nm when the sol-gel was cured at 130°C for 10.5 hours. We also achieved thermal shifts below 1 pm/°C for transverse magnetic (TM) polarization in the C band under different curing conditions. Curing time, rather than curing temperature, proves to be the most important factor in controlling the sol-gel's thermo-optic value in order to obtain an athermal device over a wide temperature range.
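To put the reported -6.8 pm/°C figure in context, the short calculation below estimates the total resonance drift over a given temperature swing and compares it with a WDM channel spacing. The 50 °C swing and the 0.8 nm grid are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope drift check for the athermalized ring resonator:
# how far does the resonance move over a temperature swing, and how does that
# compare with a WDM channel spacing? Swing and grid values are assumptions.

thermal_shift_pm_per_C = -6.8      # TE-polarization result reported above
temperature_swing_C = 50.0         # assumed operating range, e.g. 20-70 degC
channel_spacing_nm = 0.8           # ~100 GHz grid in the C band (assumption)

total_drift_nm = abs(thermal_shift_pm_per_C) * temperature_swing_C / 1000.0
print(f"total drift over {temperature_swing_C:.0f} degC: {total_drift_nm:.3f} nm")
print(f"fraction of channel spacing: {total_drift_nm / channel_spacing_nm:.1%}")
# ~0.34 nm, i.e. under half of a 0.8 nm channel, versus roughly an order of
# magnitude more drift for an uncompensated silicon ring over the same swing.
```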
Virtualization for the LHCb Online system
NASA Astrophysics Data System (ADS)
Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko
2011-12-01
Virtualization has long been advertised by the IT industry as a way to cut down cost, optimise resource usage and manage the complexity in large data-centers. The great number and the huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and the manageability of the whole system. We have done an evaluation of available hypervisors / virtualization solutions and find that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing - the cost and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, for the needs of virtual machines and this allows us to smoothly migrate services on physical machines to the virtualized infrastructure. The procedures for migration will also be described. In the final part of the document we describe our recent R&D activities aiming to replace the SAN backend for the virtualization with a cheaper iSCSI solution - this will allow all servers and related services to be moved to the virtualized infrastructure, except the ones doing hardware control via non-commodity PCI plugin cards.
The long-term Global LAnd Surface Satellite (GLASS) product suite and applications
NASA Astrophysics Data System (ADS)
Liang, S.
2015-12-01
Our Earth's environment is experiencing rapid changes due to natural variability and human activities. To monitor, understand and predict environmental changes to meet economic, social and environmental needs, the use of long-term high-quality satellite data products is critical. The Global LAnd Surface Satellite (GLASS) product suite, generated at Beijing Normal University, currently includes 12 products, including leaf area index (LAI), broadband shortwave albedo, broadband longwave emissivity, downwelling shortwave radiation and photosynthetically active radiation, land surface skin temperature, longwave net radiation, daytime all-wave net radiation, fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR), fraction of green vegetation coverage, gross primary productivity (GPP), and evapotranspiration (ET). Most products span from 1981-2014. The algorithms for producing these products have been published in the top remote sensing related journals and books. More and more applications are being reported in the scientific literature. The GLASS products are freely available at the Center for Global Change Data Processing and Analysis of Beijing Normal University (http://www.bnu-datacenter.com/), and the University of Maryland Global Land Cover Facility (http://glcf.umd.edu). After briefly introducing the basic characteristics of GLASS products, we will present some applications on long-term environmental changes detected from GLASS products at both global and local scales. Detailed analysis of regional hotspots, such as Greenland, the Tibetan plateau, and northern China, will be emphasized, where environmental changes have been mainly associated with climate warming, drought, land-atmosphere interactions, and human activities.
High-Performance Computing Data Center | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine - the largest high-performance computing system in the world exclusively dedicated to advancing
Case study: Optimizing fault model input parameters using bio-inspired algorithms
NASA Astrophysics Data System (ADS)
Plucar, Jan; Grunt, Ondřej; Zelinka, Ivan
2017-07-01
We present a case study that demonstrates a bio-inspired approach in the process of finding optimal parameters for a GSM fault model. This model is constructed using a Petri Nets approach; it represents a dynamic model of the GSM network environment in the suburban areas of Ostrava city (Czech Republic). We have been faced with the task of finding optimal parameters for an application that requires a high amount of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) or the Self Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare results and judge their performance in fault probability mitigation.
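For readers unfamiliar with the optimizers mentioned, the sketch below is a minimal Differential Evolution (DE/rand/1/bin) loop over a generic objective. The fault-model objective itself is not given in the abstract, so a simple placeholder function stands in for it.

```python
# Minimal Differential Evolution (DE/rand/1/bin) sketch. The sphere function
# below is a placeholder objective; in the case study it would be replaced by
# a function that runs the Petri-net GSM fault model and returns its error.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds, dtype=float).T
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([objective(ind) for ind in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), low, high)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fitness[i]:                # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

if __name__ == '__main__':
    best_x, best_f = differential_evolution(sphere, bounds=[(-5, 5)] * 4)
    print(best_x, best_f)
```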
Experiences with the ALICE Mesos infrastructure
NASA Astrophysics Data System (ADS)
Berzano, D.; Eulisse, G.; Grigoraş, C.; Napoli, K.
2017-10-01
Apache Mesos is a resource management system for large data centres, initially developed by UC Berkeley, and now maintained under the Apache Foundation umbrella. It is widely used in the industry by companies like Apple, Twitter, and Airbnb and it is known to scale to 10 000s of nodes. Together with other tools of its ecosystem, such as Mesosphere Marathon or Metronome, it provides an end-to-end solution for datacenter operations and a unified way to exploit large distributed systems. We present the experience of the ALICE Experiment Offline & Computing in deploying and using in production the Apache Mesos ecosystem for a variety of tasks on a small 500-core cluster, using hybrid OpenStack and bare-metal resources. We will initially introduce the architecture of our setup and its operation; we will then describe the tasks performed by it, including release building and QA, release validation, and simple Monte Carlo production. We will show how we developed Mesos enabled components (called “Mesos Frameworks”) to carry out ALICE-specific needs. In particular, we will illustrate our effort to integrate Work Queue, a lightweight batch processing engine developed by the University of Notre Dame, which ALICE uses to orchestrate release validation. Finally, we will give an outlook on how to use Mesos as a resource manager for DDS, a software deployment system developed by GSI which will be the foundation of the system deployment for ALICE next generation Online-Offline (O2).
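As a concrete example of the kind of Mesos-ecosystem interaction described above, the snippet below posts a minimal long-running application definition to a Marathon endpoint via its REST API. The Marathon URL and the application payload are placeholders for illustration, not the ALICE frameworks themselves.

```python
# Submit a minimal long-running app to Marathon (the Mesos-ecosystem scheduler
# mentioned above) through its REST API. The endpoint URL and the app payload
# are placeholders; ALICE's own Mesos frameworks are custom components.
import requests

MARATHON_URL = 'http://marathon.example.org:8080'   # hypothetical endpoint

app = {
    'id': '/demo/build-worker',
    'cmd': 'while true; do echo working; sleep 60; done',
    'cpus': 0.5,
    'mem': 256,        # MiB
    'instances': 3,
}

resp = requests.post(f'{MARATHON_URL}/v2/apps', json=app, timeout=10)
resp.raise_for_status()
print('deployment accepted:', resp.json().get('id'))
```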
Berkeley Lab - Materials Sciences Division
High-Performance Computing Data Center Warm-Water Liquid Cooling | Computational Science | NREL
NREL's High-Performance Computing Data Center (HPC Data Center) uses warm-water liquid cooling. Liquid cooling technologies offer a more energy-efficient solution that also allows for effective
Los Alamos Before and After the Fire
NASA Technical Reports Server (NTRS)
2002-01-01
On May 4, 2000, a prescribed fire was set at Bandelier National Monument, New Mexico, to clear brush and dead and dying undergrowth to prevent a larger, subsequent wildfire. Unfortunately, due to high winds and extremely dry conditions in the surrounding area, the prescribed fire quickly raged out of control and, by May 10, the blaze had spread into the nearby town of Los Alamos. In all, more than 20,000 people were evacuated from their homes and more than 200 houses were destroyed as the flames consumed about 48,000 acres in and around the Los Alamos area. The pair of images above were acquired by the Enhanced Thematic Mapper Plus (ETM+) sensor, flying aboard NASA's Landsat 7 satellite, shortly before the Los Alamos fire (top image, acquired April 14) and shortly after the fire was extinguished (lower image, June 17). The images reveal the extent of the damage caused by the fire. Combining ETM+ channels 7, 4, and 2 (one visible and two infrared channels) results in a false-color image where vegetation appears as bright to dark green. Forested areas are generally dark green while herbaceous vegetation is light green. Rangeland or more open areas appear pink to light purple. Areas with extensive pavement or urban development appear light blue or white to purple. Less densely-developed residential areas appear light green and golf courses are very bright green. In the lower image, the areas recently burned appear bright red. Landsat 7 data courtesy United States Geological Survey EROS DataCenter. Images by Robert Simmon, NASA GSFC.
BEYOND TEXT: USING ARRAYS TO REPRESENT AND ANALYZE ETHNOGRAPHIC DATA
Abramson, Corey M.; Dohan, Daniel
2015-01-01
Recent methodological debates in sociology have focused on how data and analyses might be made more open and accessible, how the process of theorizing and knowledge production might be made more explicit, and how developing means of visualization can help address these issues. In ethnography, where scholars from various traditions do not necessarily share basic epistemological assumptions about the research enterprise with either their quantitative colleagues or one another, these issues are particularly complex. Nevertheless, ethnographers working within the field of sociology face a set of common pragmatic challenges related to managing, analyzing, and presenting the rich context-dependent data generated during fieldwork. Inspired by both ongoing discussions about how sociological research might be made more transparent, as well as innovations in other data-centered fields, the authors developed an interactive visual approach that provides tools for addressing these shared pragmatic challenges. They label the approach “ethnoarray” analysis. This article introduces this approach and explains how it can help scholars address widely shared logistical and technical complexities, while remaining sensitive to both ethnography’s epistemic diversity and its practitioners’ shared commitment to depth, context, and interpretation. The authors use data from an ethnographic study of serious illness to construct a model of an ethnoarray and explain how such an array might be linked to data repositories to facilitate new forms of analysis, interpretation, and sharing within scholarly and lay communities. They conclude by discussing some potential implications of the ethnoarray and related approaches for the scope, practice, and forms of ethnography. PMID:26834296
Romanian Complex Data Center for Dense Seismic network
NASA Astrophysics Data System (ADS)
Neagoe, Cristian; Ionescu, Constantin; Marius Manea, Liviu
2010-05-01
Since 2002 the National Institute for Earth Physics (NIEP) has developed its own real-time digital seismic network, consisting of 96 seismic stations, of which 35 are equipped with broadband sensors and 24 with short-period sensors, and two seismic arrays, all transmitting data in real time to the National Data Center (NDC) and the Eforie Nord (EFOR) Seismic Observatory. EFOR is the back-up for the NDC and also a monitoring center for Black Sea tsunamis. Seismic stations are equipped with Quanterra Q330 and K2 digitizers, broadband seismometers (STS2, CMG40T, CMG 3ESP, CMG3T) and Kinemetrics EpiSensor acceleration sensors (+/- 2g). SeedLink, which is part of Seiscomp 2.5, and Antelope are the software packages used for real-time (RT) acquisition and data exchange. Communication from the digital seismic stations to the National Data Center in Bucharest and the Eforie Nord Seismic Observatory is assured by 5 providers (GPRS, VPN, satellite, radio and Internet communication). For acquisition and data processing at the two reception and processing centers, Antelope 4.11 is used, running on 2 workstations (one for real-time and one for offline processing), along with a Seiscomp 3 server that works as a back-up for Antelope 4.11. Both the acquisition and analysis systems produce information about local and global earthquake parameters; in addition, Antelope is used for manual processing (event association, magnitude calculation, database creation, sending seismic bulletins, calculation of PGA and PGV, etc.), generating ShakeMap products and interacting with global data centers. In order to make all this information easily available across the Web, and also to lay the groundwork for a more modular and flexible development environment, the National Data Center developed tools to centralize data from software such as Antelope, which uses a dedicated database system (Datascope, a database system based on text files), into a more general-purpose database, MySQL, which acts as a hub between the different acquisition and analysis systems used in the data center while also providing better connectivity at no expense to security. Mirroring certain data to MySQL also allows the National Data Center to easily share information with the public via the new application being developed, and also to mix in data collected from the public (e.g. information about damage observed after an earthquake, which in turn is used to produce macroseismic intensity indices that are then stored in the database and made available via the web application). For internal use there is also a web application which, using data stored in the database, displays earthquake information such as location, magnitude and depth in semi-real time, thus aiding the personnel on duty. Another use of the collected data is to create and maintain contact lists to which the datacenter sends notifications (SMS and email) based on the parameters of the earthquake. For future development, among other things, the Data Center plans to develop the means to cross-check the generated data between the different acquisition and analysis systems (e.g. comparing data generated by Antelope with data generated by Seiscomp).
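The MySQL "hub" described above can be pictured with a small mirroring script. The connection details, table layout, and event fields below are hypothetical stand-ins for the data center's actual schema, which is not described in the abstract.

```python
# Sketch of mirroring earthquake parameters from an acquisition system into a
# central MySQL hub. Connection details, table name, and columns are
# hypothetical; the real schema at the data center is not described here.
import pymysql

event = {                      # example parameters as produced by the analysis system
    'origin_time': '2012-12-23 13:14:15',
    'latitude': 42.0,
    'longitude': 41.0,
    'depth_km': 15.0,
    'magnitude': 5.6,
}

conn = pymysql.connect(host='db.example.org', user='seismo',
                       password='***', database='ndc_hub')
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO events (origin_time, latitude, longitude, depth_km, magnitude)
            VALUES (%s, %s, %s, %s, %s)
            """,
            (event['origin_time'], event['latitude'], event['longitude'],
             event['depth_km'], event['magnitude']),
        )
    conn.commit()              # the web application and notifier read from this table
finally:
    conn.close()
```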
Laboratory Computing Resource Center
The Development of University Computing in Sweden 1965-1985
NASA Astrophysics Data System (ADS)
Dahlstrand, Ingemar
In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
Computers in aeronautics and space research at the Lewis Research Center
NASA Technical Reports Server (NTRS)
1991-01-01
This brochure presents a general discussion of the role of computers in aerospace research at NASA's Lewis Research Center (LeRC). Four particular areas of computer applications are addressed: computer modeling and simulation, computer assisted engineering, data acquisition and analysis, and computer controlled testing.
User-Centered Computer Aided Language Learning
ERIC Educational Resources Information Center
Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.
2006-01-01
In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…
CFD Modeling Activities at the NASA Stennis Space Center
NASA Technical Reports Server (NTRS)
Allgood, Daniel
2007-01-01
A viewgraph presentation on NASA Stennis Space Center's Computational Fluid Dynamics (CFD) Modeling activities is shown. The topics include: 1) Overview of NASA Stennis Space Center; 2) Role of Computational Modeling at NASA-SSC; 3) Computational Modeling Tools and Resources; and 4) CFD Modeling Applications.
Mathematics and Computer Science | Argonne National Laboratory
Computer Center Harris 1600 Operator’s Guide.
1982-06-01
Computer Center Harris 1600 Operator's Guide, report CMLD-82-15, by David V. Sommer and Sharon E. Good, David W. Taylor Naval Ship Research and Development Center, June 1982. Approved for public release: distribution unlimited.
NASA Astrophysics Data System (ADS)
Okamoto, Satoru; Sato, Takehiro; Yamanaka, Naoaki
2017-01-01
In this paper, flexible and highly reliable metro and access integrated networks with network virtualization and software defined networking technologies will be presented. Logical optical line terminal (L-OLT) technologies and active optical distribution networks (ODNs) are the key to introducing flexibility and high reliability into the metro and access integrated networks. In the Elastic Lambda Aggregation Network (EλAN) project, which was started in 2012, the concept of the programmable optical line terminal (P-OLT) has been proposed. The role of the P-OLT is to provide multiple network services that have different protocols and quality of service requirements with a single OLT box. Accommodated services will be Internet access, mobile front-haul/back-haul, data-center access, and leased line. L-OLTs are configured within the P-OLT box to support the functions required for each network service. Multiple P-OLTs and programmable optical network units (P-ONUs) are connected by the active ODN. Optical access paths which have flexible capacity are set on the ODN to provide network services from L-OLT to logical ONUs (L-ONUs). The L-OLT to L-ONU path on the active ODN provides a logical connection. Therefore, introducing virtualization technologies becomes possible. One example is moving an L-OLT from one P-OLT to another P-OLT like a virtual machine. This movement is called L-OLT migration. The L-OLT migration provides flexible and reliable network functions such as energy saving by aggregating L-OLTs onto a limited number of P-OLTs, and network-wide optical access path restoration. Other L-OLT virtualization technologies and experimental results will also be discussed in the paper.
Analog and digital transport of RF channels over converged 5G wireless-optical networks
NASA Astrophysics Data System (ADS)
Binh, Le Nguyen
2016-02-01
Under the exponentially increasing demand from emerging 5G wireless access networking, and thus from the data-center based Internet, novel and economical transport of RF channels to and from wireless access systems is required. This paper presents transport technologies for RF channels in the analog and digital domains so as to meet transport capacity demands reaching multi-Tbps, as follows: (i) the convergence of 5G broadband wireless and optical networks and its demands on capacity delivery and network structures; (ii) analog optical technologies for delivery of both the information and RF carriers to and from multiple-input multiple-output (MIMO) antenna sites, so as to control the beam steering of MIMO antennas in the mmW range at either 28.6 GHz or 56.8 GHz RF carriers and to deliver channels of aggregate capacity reaching several Tbps; (iii) transceivers employing advanced digital modulation formats and digital signal processing (DSP) so as to provide 100G and beyond transmission rates to meet ultra-high capacity demands with flexible spectral grids, hence pay-on-demand services, with the interplay between DSP-based and analog transport techniques examined; (iv) transport technologies for 5G cloud access networks and associated modulation and digital processing techniques for capacity efficiency; and (v) finally, integrated optic technologies with novel lasers, comb generators and simultaneous dual-function photonic devices for both demultiplexing/multiplexing and modulation, hence a system-on-chip structure. Quantum dot lasers and matrices of micro ring resonators integrated on the same Si-on-Silica substrate are proposed and described.
Taylor, Kyla W; Baird, Donna D; Herring, Amy H; Engel, Lawrence S; Nichols, Hazel B; Sandler, Dale P; Troester, Melissa A
2017-09-01
It is hypothesized that certain chemicals in personal care products may alter the risk of adverse health outcomes. The primary aim of this study was to use a data-centered approach to classify complex patterns of exposure to personal care products and to understand how these patterns vary according to use of exogenous hormone exposures, oral contraceptives (OCs) and post-menopausal hormone therapy (HT). The NIEHS Sister Study is a prospective cohort study of 50,884 US women. Limiting the sample to non-Hispanic blacks and whites (N=47,019), latent class analysis (LCA) was used to identify groups of individuals with similar patterns of personal care product use based on responses to 48 survey questions. Personal care products were categorized into three product types (beauty, hair, and skincare products) and separate latent classes were constructed for each type. Adjusted prevalence differences (PD) were calculated to estimate the association between exogenous hormone use, as measured by ever/never OC or HT use, and patterns of personal care product use. LCA reduced data dimensionality by grouping individuals with similar patterns of personal care product use into mutually exclusive latent classes (three latent classes for beauty product use, three for hair, and four for skin care). There were strong differences in personal care usage by race, particularly for haircare products. For both blacks and whites, exogenous hormone exposures were associated with higher levels of product use, especially beauty and skincare products. Relative to individual product use questions, latent class variables capture complex patterns of personal care product usage. These patterns differed by race and were associated with ever OC and HT use. Future studies should consider personal care product exposures with other exogenous exposures when modeling health risks.
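As a rough illustration of the grouping step, the following sketch fits a latent class model to binary product-use indicators with a plain EM loop in numpy. It is not the study's actual analysis pipeline; the synthetic data, class count, and variable names are assumptions made for the example.

```python
# Minimal EM for latent class analysis of binary indicators (illustrative only).
import numpy as np

def fit_lca(X, n_classes=3, n_iter=200, seed=0):
    """X: (n_subjects, n_items) 0/1 matrix. Returns class prevalences,
    item-response probabilities, and posterior class memberships."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)                 # class prevalences
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))     # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: unnormalized log P(class k, responses x_i) for every subject
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T) + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)               # posterior memberships
        # M-step: update prevalences and item-response probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta, resp

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "product use" data: two underlying usage patterns over 6 items.
    true_theta = np.array([[0.9, 0.8, 0.7, 0.2, 0.1, 0.2],
                           [0.1, 0.2, 0.3, 0.8, 0.9, 0.7]])
    z = rng.integers(0, 2, size=500)
    X = (rng.random((500, 6)) < true_theta[z]).astype(float)
    pi, theta, resp = fit_lca(X, n_classes=2)
    print("estimated class prevalences:", np.round(pi, 2))
```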
Taylor, Kyla W.; Baird, Donna D.; Herring, Amy H.; Engel, Lawrence S.; Nichols, Hazel B.; Sandler, Dale P.; Troester, Melissa A.
2017-01-01
It is hypothesized that certain chemicals in personal care products may alter the risk of adverse health outcomes. The primary aim of this study was to use a data-centered approach to classify complex patterns of exposure to personal care products and to understand how these patterns vary according to use of exogenous hormone exposures, oral contraceptives (OCs) and post-menopausal hormone therapy (HT). The NIEHS Sister Study is a prospective cohort study of 50,884 US women. Limiting the sample to non-Hispanic blacks and whites (N = 47,019), latent class analysis (LCA) was used to identify groups of individuals with similar patterns of personal care product use based on responses to 48 survey questions. Personal care products were categorized into three product types (beauty, hair, and skincare products) and separate latent classes were constructed for each type. Adjusted prevalence differences (PD) were calculated to estimate the association between exogenous hormone use, as measured by ever/never OC or HT use, and patterns of personal care product use. LCA reduced data dimensionality by grouping individuals with similar patterns of personal care product use into mutually exclusive latent classes (three latent classes for beauty product use, three for hair, and four for skin care). There were strong differences in personal care usage by race, particularly for haircare products. For both blacks and whites, exogenous hormone exposures were associated with higher levels of product use, especially beauty and skincare products. Relative to individual product use questions, latent class variables capture complex patterns of personal care product usage. These patterns differed by race and were associated with ever OC and HT use. Future studies should consider personal care product exposures with other exogenous exposures when modeling health risks. PMID:28120835
NASA Astrophysics Data System (ADS)
Webley, P.; Dehn, J.; Dean, K. G.; Macfarlane, S.
2010-12-01
Volcanic eruptions are a global hazard, affecting local infrastructure, impacting airports and hindering the aviation community, as seen in Europe during Spring 2010 from the Eyjafjallajokull eruption in Iceland. Here, we show how remote sensing data are used through web-based interfaces for monitoring volcanic activity, both ground-based thermal signals and airborne ash clouds. These ‘web tools’, http://avo.images.alaska.edu/, provide timely availability of polar-orbiting and geostationary data from US National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration and Japanese Meteorological Agency satellites for the North Pacific (NOPAC) region. These data are used operationally by the Alaska Volcano Observatory (AVO) for monitoring volcanic activity, especially at remote volcanoes, and generate ‘alarms’ for any detected volcanic activity and ash clouds. The web tools allow the remote sensing team of AVO to easily perform their twice-daily monitoring shifts. The web tools also assist the National Weather Service, Alaska and the Kamchatkan Volcanic Emergency Response Team, Russia in their operational duties. Users are able to detect ash clouds and measure their distance from the source, area and signal strength. Within the web tools, there are 40 x 40 km datasets centered on each volcano and a searchable database of all acquired data from 1993 until present, with the ability to produce time series data per volcano. Additionally, a data center illustrates the data acquired across the NOPAC within the last 48 hours, http://avo.images.alaska.edu/tools/datacenter/. We will illustrate new visualization tools allowing users to display the satellite imagery within Google Earth/Maps and ArcGIS Explorer, both as static maps and as time-animated imagery. We will show these tools in real time as well as examples of past large volcanic eruptions. In the future, we will develop the tools to produce real-time ash retrievals, run volcanic ash dispersion models from detected ash clouds and extend the browser interfaces to display other remote sensing datasets, such as volcanic sulfur dioxide detection.
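A minor technical detail above is the 40 x 40 km window centered on each volcano. A small sketch of how such a window could be derived from a volcano's coordinates is given below, using a flat-Earth approximation; the coordinates and function names are illustrative and are not taken from AVO's code.

```python
# Approximate a 40 x 40 km lat/lon window centered on a volcano (illustrative only).
import math

KM_PER_DEG_LAT = 111.32   # mean km per degree of latitude

def volcano_window(lat_deg, lon_deg, half_width_km=20.0):
    """Return (lat_min, lat_max, lon_min, lon_max) for a square window."""
    dlat = half_width_km / KM_PER_DEG_LAT
    dlon = half_width_km / (KM_PER_DEG_LAT * math.cos(math.radians(lat_deg)))
    return (lat_deg - dlat, lat_deg + dlat, lon_deg - dlon, lon_deg + dlon)

# Example: an Aleutian-arc volcano at roughly 52.8 N, 169.9 W (coordinates are illustrative).
print(volcano_window(52.8, -169.9))
```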
77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-12
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service TM . ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...
Roadmap of optical communications
NASA Astrophysics Data System (ADS)
Agrell, Erik; Karlsson, Magnus; Chraplyvy, A. R.; Richardson, David J.; Krummrich, Peter M.; Winzer, Peter; Roberts, Kim; Fischer, Johannes Karl; Savory, Seb J.; Eggleton, Benjamin J.; Secondini, Marco; Kschischang, Frank R.; Lord, Andrew; Prat, Josep; Tomkos, Ioannis; Bowers, John E.; Srinivasan, Sudha; Brandt-Pearce, Maïté; Gisin, Nicolas
2016-06-01
Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfields, to contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
Computer Maintenance Operations Center (CMOC), additional computer support equipment ...
Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA
ERIC Educational Resources Information Center
Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung
2013-01-01
Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…
Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers ...
Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
Center for Computing Research Summer Research Proceedings 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Andrew Michael; Parks, Michael L.
2015-12-18
The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).
NASA Astrophysics Data System (ADS)
Szelag, Bertrand; Abraham, Alexis; Brision, Stéphane; Gindre, Paul; Blampey, Benjamin; Myko, André; Olivier, Segolene; Kopp, Christophe
2017-05-01
Silicon photonics is becoming a reality for next-generation communication systems, addressing the increasing needs of HPC (high-performance computing) systems and datacenters. CMOS-compatible photonic platforms integrating passive and active devices are being developed in many foundries. The use of existing, qualified microelectronics processes guarantees cost-efficient and mature photonic technologies. Meanwhile, photonic devices have their own fabrication constraints, not identical to those of CMOS devices, which can affect their performance. In this paper, we address the integration of a PN-junction Mach-Zehnder modulator in a 200 mm CMOS-compatible photonic platform. The characteristics of implantation-based devices are affected by many process variations, among them screening-layer thickness, dopant diffusion, and implantation-mask overlay. CMOS devices are generally quite robust with respect to these processes thanks to dedicated design rules. For photonic devices the situation is different since, most of the time, doped areas must be carefully located within waveguides, and CMOS solutions such as self-alignment to the gate cannot be applied. In this work, we present different robust integration solutions for junction-based modulators. A simulation setup has been built in order to optimize the process conditions. It consists of a MATLAB interface coupling process and device electro-optic simulators so that many iterations can be run. Variations of the modulator characteristics with process parameters are illustrated using this simulation setup. The parameters under study include X- and Y-direction lithography shifts and screening-oxide and slab thicknesses. A robust process and design approach leading to a pn-junction Mach-Zehnder modulator insensitive to lithography misalignment is then proposed. Simulation results are compared with experimental data. Indeed, various modulators have been fabricated with different process conditions and integration schemes. Extensive electro-optic characterization of these components will be presented.
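To give a flavor of the parameter sweep described above (many iterations over lithography shifts and layer thicknesses), the sketch below runs a toy Monte Carlo study of a fictitious modulator figure of merit. The real setup couples process and electro-optic simulators through a MATLAB interface; every model, distribution, and number here is a stand-in.

```python
# Toy Monte Carlo sweep over process variations for a pn-junction modulator (all models fictitious).
import numpy as np

def toy_vpil(dx_nm, dy_nm, screen_ox_nm, slab_nm):
    """Fictitious VpiL [V.cm] model: degrades as the junction drifts off the waveguide
    center and as the screening oxide / slab thickness move off target."""
    misalign = np.hypot(dx_nm, dy_nm)
    return 1.2 + 0.004 * misalign + 0.01 * abs(screen_ox_nm - 20) + 0.005 * abs(slab_nm - 150)

rng = np.random.default_rng(42)
n = 10_000
dx = rng.normal(0.0, 15.0, n)          # X lithography shift, nm (assumed spread)
dy = rng.normal(0.0, 15.0, n)          # Y lithography shift, nm
ox = rng.normal(20.0, 1.5, n)          # screening oxide thickness, nm
slab = rng.normal(150.0, 5.0, n)       # slab thickness, nm

vpil = toy_vpil(dx, dy, ox, slab)
print(f"VpiL mean = {vpil.mean():.2f} V.cm, std = {vpil.std():.2f}, "
      f"P(VpiL > 1.5) = {(vpil > 1.5).mean():.1%}")
```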
PAM4 silicon photonic microring resonator-based transceiver circuits
NASA Astrophysics Data System (ADS)
Palermo, Samuel; Yu, Kunzhi; Roshan-Zamir, Ashkan; Wang, Binhao; Li, Cheng; Seyedi, M. Ashkan; Fiorentino, Marco; Beausoleil, Raymond
2017-02-01
Increased data rates have motivated the investigation of advanced modulation schemes, such as four-level pulse-amplitude modulation (PAM4), in optical interconnect systems in order to enable longer transmission distances and operation with reduced circuit bandwidth relative to non-return-to-zero (NRZ) modulation. Employing this modulation scheme in interconnect architectures based on high-Q silicon photonic microring resonator devices, which occupy small area and allow for inherent wavelength-division multiplexing (WDM), offers a promising solution to address the dramatic increase in datacenter and high-performance computing system I/O bandwidth demands. Two ring modulator device structures are proposed for PAM4 modulation, including a single phase shifter segment device driven with a multi-level PAM4 transmitter and a two-segment device driven by two simple NRZ (MSB/LSB) transmitters. Transmitter circuits which utilize segmented pulsed-cascode high swing output stages are presented for both device structures. Output stage segmentation is utilized in the single-segment device design for PAM4 voltage level control, while in the two-segment design it is used for both independent MSB/LSB voltage levels and impedance control for output eye skew compensation. The 65nm CMOS transmitters supply a 4.4Vppd output swing for 40Gb/s operation when driving depletion-mode microring modulators implemented in a 130nm SOI process, with the single- and two-segment designs achieving 3.04 and 4.38mW/Gb/s, respectively. A PAM4 optical receiver front-end is also described which employs a large input-stage feedback resistor transimpedance amplifier (TIA) cascaded with an adaptively-tuned continuous-time linear equalizer (CTLE) for improved sensitivity. Receiver linearity, critical in PAM4 systems, is achieved with a peak-detector-based automatic gain control (AGC) loop.
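A small sketch of the two driving schemes follows: a four-level drive for the single-segment device, and an MSB/LSB pair of NRZ drives for a two-segment device whose segments are weighted 2:1. The voltage mapping and names are illustrative rather than the actual 65 nm circuit design.

```python
# PAM4 symbol mapping for single-segment vs. two-segment microring modulators (illustrative).
V_SWING = 4.4   # differential swing in volts, per the transmitter described above

def pam4_single_segment(bit_msb, bit_lsb):
    """One phase shifter driven with four voltage levels (natural binary 2-bit mapping)."""
    level = 2 * bit_msb + bit_lsb                   # 0..3
    return level / 3.0 * V_SWING

def pam4_two_segment(bit_msb, bit_lsb):
    """Two NRZ drivers on segments with a 2:1 length ratio; the optical response sums
    the two contributions to produce four levels."""
    msb_drive = bit_msb * (2.0 / 3.0) * V_SWING     # long segment
    lsb_drive = bit_lsb * (1.0 / 3.0) * V_SWING     # short segment
    return msb_drive + lsb_drive

for msb in (0, 1):
    for lsb in (0, 1):
        a = pam4_single_segment(msb, lsb)
        b = pam4_two_segment(msb, lsb)
        assert abs(a - b) < 1e-9, "both schemes should target the same optical levels"
        print(f"symbol {msb}{lsb}: {a:.2f} V equivalent drive")
```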
78 FR 45513 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... individual's privacy, and would result in additional delay in determining eligibility and, if applicable, the... Defense. NOTICE OF A COMPUTER MATCHING PROGRAM AMONG THE DEFENSE MANPOWER DATA CENTER, THE DEPARTMENT OF...
20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...
20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A
2016-01-01
The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides framework for (1) rapid prototyping, (2) large scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software are made available in open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
Energy 101: Energy Efficient Data Centers
None
2018-04-16
Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components; up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
An investigation of the effects of touchpad location within a notebook computer.
Kelaher, D; Nay, T; Lawrence, B; Lamar, S; Sommerich, C M
2001-02-01
This study evaluated effects of the location of a notebook computer's integrated touchpad, complementing previous work in the area of desktop mouse location effects. Most often, integrated touchpads are located in the computer's wrist rest and centered on the keyboard. This study characterized effects of this bottom-center location and four alternatives (top center, top right, right side, and bottom right) upon upper extremity posture, discomfort, preference, and performance. Touchpad location was found to significantly impact each of those measures. The top center location was particularly poor, in that it elicited more ulnar deviation, more shoulder flexion, more discomfort, and perceptions of performance impedance. In general, the bottom center, bottom right, and right side locations fared better, though subjects' wrists were more extended in the bottom locations. Suggestions for notebook computer design are provided.
FY 72 Computer Utilization at the Transportation Systems Center
DOT National Transportation Integrated Search
1972-08-01
The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...
Kevin Regimbal oversees NREL's High Performance Computing (HPC) systems, engineering, and operations. Kevin is interested in data center design and computing, as well as data center integration and optimization.
The role of dedicated data computing centers in the age of cloud computing
NASA Astrophysics Data System (ADS)
Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr
2017-10-01
Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.
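The sort of cost comparison mentioned above often reduces to a per-core-hour break-even estimate. The sketch below is a back-of-the-envelope calculation with entirely made-up numbers for hardware price, power, staffing, utilization, and cloud pricing; it is not the BNL/RACF analysis.

```python
# Back-of-the-envelope local-vs-cloud cost per core-hour (all figures are placeholders).
def local_cost_per_core_hour(server_price, cores, lifetime_years,
                             watts_per_core, power_price_kwh, pue,
                             admin_cost_per_year, utilization):
    utilized_hours = lifetime_years * 8760 * utilization
    capex = server_price / cores / utilized_hours
    energy = (watts_per_core / 1000.0) * pue * power_price_kwh / utilization
    opex = admin_cost_per_year * lifetime_years / cores / utilized_hours
    return capex + energy + opex

local = local_cost_per_core_hour(server_price=10_000, cores=64, lifetime_years=5,
                                 watts_per_core=10, power_price_kwh=0.10, pue=1.4,
                                 admin_cost_per_year=2_000, utilization=0.85)
cloud_on_demand = 0.05   # $/core-hour, placeholder on-demand rate
print(f"local ~${local:.3f}/core-hour vs cloud ~${cloud_on_demand:.3f}/core-hour")
```

Under these placeholder assumptions a well-utilized local cluster comes out cheaper than on-demand cloud capacity, which is the kind of trade-off such a study weighs against elasticity and workload type.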
CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY
The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
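To make the idea concrete, here is a sketch of the kind of computation such a builder lets a flight controller express: a unit-conversion macro plus a simple limit check on raw telemetry. The GCB uses its own control-center language; the Python below, and all calibration constants and parameter names in it, are invented for illustration.

```python
# Illustrative "computation" a GCB-like tool might generate: convert units and flag a limit violation.
PSI_PER_KPA = 0.145038          # unit-conversion "macro"

def tank_pressure_check(raw_counts, gain=0.0125, offset=-2.0, limit_psi=38.0):
    """Convert a raw telemetry count to engineering units and apply a fault check.
    Calibration constants and the limit are made-up example values."""
    kpa = gain * raw_counts + offset          # counts -> kPa
    psi = kpa * PSI_PER_KPA                   # kPa -> psi
    status = "FAULT" if psi > limit_psi else "OK"
    return psi, status

for counts in (12000, 22000):
    psi, status = tank_pressure_check(counts)
    print(f"raw={counts:6d}  pressure={psi:6.1f} psi  {status}")
```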
Cloudbursting - Solving the 3-body problem
NASA Astrophysics Data System (ADS)
Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.
2014-12-01
Many science projects in the future will be accomplished through collaboration among two or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively with less time spent moving data or waiting for computing resources to free up. Based on work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in identifying the feasibility and the obstacles, both technical and managerial, of performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.
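A sketch of the placement decision implied by this setup is shown below: run a job at its home center when there is headroom, otherwise burst it to the peer private cloud with the most free capacity. The center names come from the abstract, but the capacities, threshold, and interface are invented.

```python
# Minimal burst-or-run-locally decision among three private clouds (hypothetical numbers and API).
from dataclasses import dataclass

@dataclass
class Center:
    name: str
    total_vcpus: int
    used_vcpus: int

    def free(self):
        return self.total_vcpus - self.used_vcpus

def place_job(job_vcpus, home: Center, peers):
    """Run at home if it fits under a utilization cap, else burst to the peer with most headroom."""
    cap = 0.9 * home.total_vcpus
    if home.used_vcpus + job_vcpus <= cap:
        target = home
    else:
        candidates = [p for p in peers if p.free() >= job_vcpus]
        if not candidates:
            return None                       # queue locally until capacity frees up
        target = max(candidates, key=lambda p: p.free())
    target.used_vcpus += job_vcpus
    return target.name

jpl = Center("JPL", 512, 470)
larc = Center("LaRC", 256, 60)
arc = Center("ARC", 384, 300)
print(place_job(64, home=jpl, peers=[larc, arc]))   # bursts to LaRC in this toy scenario
```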
Computers and Media Centers--A Winning Combination.
ERIC Educational Resources Information Center
Graf, Nancy
1984-01-01
Profile of the computer program offered by the library/media center at Chief Joseph Junior High School in Richland, Washington, highlights program background, operator's licensing procedure, the trainer license, assistance from high school students, need for more computers, handling of software, and helpful hints. (EJS)
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
During the month of June, the Survey Research Center (SRC) at the University of Georgia designed new benefits questionnaires for computer software management and information center (COSMIC). As a test of their utility, these questionnaires are now used in the benefits identification process.
Reinventing patient-centered computing for the twenty-first century.
Goldberg, H S; Morales, A; Gottlieb, L; Meador, L; Safran, C
2001-01-01
Despite evidence over the past decade that patients like and will use patient-centered computing systems in managing their health, patients have remained forgotten stakeholders in advances in clinical computing systems. We present a framework for patient empowerment and the technical realization of that framework in an architecture called CareLink. In an evaluation of the initial deployment of CareLink in the support of neonatal intensive care, we have demonstrated a reduction in the length of stay for very-low birthweight infants, and an improvement in family satisfaction with care delivery. With the ubiquitous adoption of the Internet into the general culture, patient-centered computing provides the opportunity to mend broken health care relationships and reconnect patients to the care delivery process. CareLink itself provides functionality to support both clinical care and research, and provides a living laboratory for the further study of patient-centered computing.
Joint the Center for Applied Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd; Bremer, Timo; Van Essen, Brian
The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.
Advanced Biomedical Computing Center (ABCC) | DSITP
The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.
Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...
Community Information Centers and the Computer.
ERIC Educational Resources Information Center
Carroll, John M.; Tague, Jean M.
Two computer data bases have been developed by the Computer Science Department at the University of Western Ontario for "Information London," the local community information center. One system, called LONDON, permits Boolean searches of a file of 5,000 records describing human service agencies in the London area. The second system,…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0059] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0022] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...
Cornell University Center for Advanced Computing
Digital optical computers at the optoelectronic computing systems center
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.
Final Report. Center for Scalable Application Development Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-26
The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.
Intention and Usage of Computer Based Information Systems in Primary Health Centers
ERIC Educational Resources Information Center
Hosizah; Kuntoro; Basuki N., Hari
2016-01-01
Computer-based information systems (CBIS) have been adopted in almost all health care settings, including primary health centers in East Java Province, Indonesia. Some of the software packages available are SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most of the primary health centers did not implement them successfully. This…
NASA Technical Reports Server (NTRS)
McGalliard, James
2008-01-01
A viewgraph describing the use of multiple frameworks by NASA, GSA, and U.S. Government agencies is presented. The contents include: 1) Federal Systems Integration and Management Center (FEDSIM) and NASA Center for Computational Sciences (NCCS) Environment; 2) Ruling Frameworks; 3) Implications; and 4) Reconciling Multiple Frameworks.
Roy Fraley, Professional II-Engineer, Roy.Fraley@nrel.gov | 303-384-6468. Roy Fraley is the high-performance computing (HPC) data center engineer in NREL's Computational Science Center.
NASA Technical Reports Server (NTRS)
Pankratz, Chris; Beland, Stephane; Craft, James; Baltzer, Thomas; Wilson, Anne; Lindholm, Doug; Snow, Martin; Woods, Thomas; Woodraska, Don
2018-01-01
The Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado in Boulder, USA operates the Solar Radiation and Climate Experiment (SORCE) NASA mission, as well as several other NASA spacecraft and instruments. Dozens of Solar Irradiance data sets are produced, managed, and disseminated to the science community. Data are made freely available to the scientific community immediately after they are produced using a variety of data access interfaces, including the LASP Interactive Solar Irradiance Datacenter (LISIRD), which provides centralized access to a variety of solar irradiance data sets using both interactive and scriptable/programmatic methods. This poster highlights the key technological elements used for the NASA SORCE mission ground system to produce, manage, and disseminate data to the scientific community and facilitate long-term data stewardship. The poster presentation will convey designs, technological elements, practices and procedures, and software management processes used for SORCE and their relationship to data quality and data management standards, interoperability, NASA data policy, and community expectations.
NASA Astrophysics Data System (ADS)
Gatto, A.; Parolari, P.; Boffi, P.
2018-05-01
Frequency-division multiplexing (FDM) is attractive for achieving high capacities in multiple-access networks based on direct modulation and direct detection. In this paper we consider point-to-point intra- and inter-datacenter connections to compare the performance of FDM operation with that achievable with a standard multiple-carrier modulation approach based on discrete multitone (DMT). DMT and FDM both allow matching of the non-uniform, bandwidth-limited response of the system under test, which is associated with the use of low-cost directly modulated sources, such as VCSELs with high-frequency chirp, and with fibre propagation in the presence of chromatic dispersion. While for the very short distances typical of intra-datacentre communications the large number of DMT subcarriers increases the transported capacity with respect to FDM, for the few-tens-of-kilometre reaches typical of inter-datacentre connections the capabilities of FDM are more evident, providing system performance similar to that of DMT.
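A rough way to see why DMT's many subcarriers help on a non-uniform, bandwidth-limited channel is the bit-loading sketch below: the adaptive allocation follows the per-subcarrier SNR, whereas a fixed FDM-style plan must use the same format on every band it keeps. The channel model, SNR gap, and numbers are illustrative, not the paper's measurements.

```python
# Illustrative bit loading over a low-pass channel: adaptive (DMT-like) vs uniform (FDM-like).
import numpy as np

n_sc = 256
f = np.linspace(0.01, 1.0, n_sc)                  # normalized subcarrier frequencies
snr_db = 30 - 25 * f**2                           # toy bandwidth-limited SNR roll-off
snr = 10 ** (snr_db / 10)
gap_db = 9.8                                      # SNR gap for a target error rate (illustrative)
gap = 10 ** (gap_db / 10)

bits_adaptive = np.floor(np.log2(1 + snr / gap)).clip(min=0)   # DMT-style per-subcarrier loading
bits_uniform = np.full(n_sc, 2)                                # e.g. fixed 4-QAM on every FDM band
bits_uniform[snr / gap < 2**2 - 1] = 0                         # bands that cannot carry 2 bits are dropped

print("adaptive loading:", int(bits_adaptive.sum()), "bits per frame")
print("uniform loading :", int(bits_uniform.sum()), "bits per frame")
```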
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
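The client-driven design can be pictured as a polling loop: the worker asks the server for work, runs it, and reports back, so no inbound connection to the worker is ever required. JobCenter itself is written in Java; the Python sketch below uses invented endpoint names and payloads purely to illustrate the pattern.

```python
# Conceptual worker polling loop in the style of a client-driven job system (endpoints are invented).
import subprocess
import time
import requests

SERVER = "http://jobserver.example.org/api"    # hypothetical JobCenter-like server
WORKER_JOB_TYPES = ["blast", "percolator"]     # job types this worker is configured to accept

def poll_once():
    r = requests.post(f"{SERVER}/request_job", json={"types": WORKER_JOB_TYPES}, timeout=30)
    job = r.json()
    if not job:                                # server had nothing for us
        return False
    result = subprocess.run(job["command"], shell=True, capture_output=True, text=True)
    requests.post(f"{SERVER}/report_result",
                  json={"job_id": job["id"], "exit_code": result.returncode,
                        "stdout": result.stdout[-10000:]},
                  timeout=30)
    return True

if __name__ == "__main__":
    while True:                                # outbound-only traffic: works behind firewalls/NAT
        if not poll_once():
            time.sleep(15)                     # back off when idle
```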
Computers in Schools of Southeast Texas in 1997.
ERIC Educational Resources Information Center
Henderson, David L.; Renfrow, Raylene
This study examined computer use in southeast Texas schools in 1997. The study population included 110 school districts in Education Service Center Regions IV and VI. These centers serve 22 counties of southeast Texas in the Houston area. Using questionnaires, researchers collected data on brands of computers presently in use, percent of computer…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...
The National Special Education Alliance: One Year Later.
ERIC Educational Resources Information Center
Green, Peter
1988-01-01
The National Special Education Alliance (a national network of local computer resource centers associated with Apple Computer, Inc.) consists, one year after formation, of 24 non-profit support centers staffed largely by volunteers. The NSEA now reaches more than 1000 disabled computer users each month and more growth in the future is expected.…
Researchers at EPA’s National Center for Computational Toxicology (NCCT) integrate advances in biology, chemistry, exposure and computer science to help prioritize chemicals for further research based on potential human health risks. The goal of this research is to quickly evalua...
Books, Bytes, and Bridges: Libraries and Computer Centers in Academic Institutions.
ERIC Educational Resources Information Center
Hardesty, Larry, Ed.
This book about the relationship between computer centers and libraries at academic institutions contains the following chapters: (1) "A History of the Rhetoric and Reality of Library and Computing Relationships" (Peggy Seiden and Michael D. Kathman); (2) "An Issue in Search of a Metaphor: Readings on the Marriageability of…
Computational structures technology and UVA Center for CST
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1992-01-01
Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.
Computer programs: Operational and mathematical, a compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Progress report on current status of computer software management and information center (COSMIC) includes the following areas: inventory, evaluation and publication, marketing, customer service, maintenance and support, and budget summary.
Center for Advanced Computational Technology
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2000-01-01
The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.
CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.
ERIC Educational Resources Information Center
Skowronski, Steven D.; Tatum, Kenneth
This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…
Nicholson, Anita; Tobin, Mary
2006-01-01
This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.
Computational mechanics and physics at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr.
1987-01-01
An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.
Center for computation and visualization of geometric structures. Final report, 1992 - 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
This report describes the overall goals and the accomplishments of the Geometry Center of the University of Minnesota, whose mission is to develop, support, and promote computational tools for visualizing geometric structures, for facilitating communication among mathematical and computer scientists and between these scientists and the public at large, and for stimulating research in geometry.
76 FR 56744 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD... (SSA) and DoD Defense Manpower Data Center (DMDC) that their records are being matched by computer. The... intrusion of the individual's privacy and would result in additional delay in the eventual SSI payment and...
Applied technology center business plan and market survey
NASA Technical Reports Server (NTRS)
Hodgin, Robert F.; Marchesini, Roberto
1990-01-01
A business plan and market survey for the Applied Technology Center (ATC), a computer technology transfer and development non-profit corporation, are presented. The mission of the ATC is to stimulate innovation in state-of-the-art and leading-edge computer-based technology. The ATC encourages the practical utilization of late-breaking computer technologies by firms of all varieties.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
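A stripped-down version of such a discrete-event simulation is sketched below: jobs with arrival times, CPU counts, and runtimes are dispatched first-come-first-served onto a fixed pool of processors, and the average queue wait is reported. This is an illustration of the technique, not the NCCS tool, and the workload numbers are invented.

```python
# Minimal FCFS discrete-event simulation of a parallel batch system (illustrative workload).
import heapq

def simulate(jobs, total_cpus):
    """jobs: list of (arrival, cpus, runtime). Returns average wait time."""
    jobs = sorted(jobs)
    running = []                 # heap of (finish_time, cpus)
    free = total_cpus
    clock = 0.0
    waits = []
    for arrival, cpus, runtime in jobs:
        clock = max(clock, arrival)
        # free up CPUs from jobs that finish before this one can start
        while free < cpus:
            finish, released = heapq.heappop(running)
            clock = max(clock, finish)
            free += released
        waits.append(clock - arrival)
        free -= cpus
        heapq.heappush(running, (clock + runtime, cpus))
    return sum(waits) / len(waits)

workload = [(0, 256, 8.0), (1, 512, 24.0), (2, 128, 4.0), (3, 768, 12.0)]  # (hour, CPUs, hours)
print(f"average queue wait: {simulate(workload, total_cpus=1024):.1f} hours")
```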
ERIC Educational Resources Information Center
Fox, Annie
1978-01-01
Relates some experiences at this nonprofit center, which was designed so that interested members of the general public can walk in and learn about computers in a safe, nonintimidating environment. STARWARS HODGE, a game written in PILOT, is also described. (CMV)
75 FR 65639 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-26
...: Computational Biology Special Emphasis Panel A. Date: October 29, 2010. Time: 2 p.m. to 3:30 p.m. Agenda: To.... Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict: Computational...
ERIC Educational Resources Information Center
Cottrell, William B.; And Others
The Nuclear Safety Information Center (NSIC) is a highly sophisticated scientific information center operated at Oak Ridge National Laboratory (ORNL) for the U.S. Atomic Energy Commission. Its information file, which consists of both data and bibliographic information, is computer stored and numerous programs have been developed to facilitate the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... with the Department of Defense (DoD), Defense Manpower Data Center (DMDC). We have provided background... & Medicaid Services and the Department of Defense, Defense Manpower Data Center for the Determination of...), Centers for Medicare & Medicaid Services (CMS), and Department of Defense (DoD), Defense Manpower Data...
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
Polymer enabled 100 Gbaud connectivity for datacom applications
NASA Astrophysics Data System (ADS)
Katopodis, V.; Groumas, P.; Zhang, Z.; Dinu, R.; Miller, E.; Konczykowska, A.; Dupuy, J.-Y.; Beretta, A.; Dede, A.; Choi, J. H.; Harati, P.; Jorge, F.; Nodjiadjim, V.; Riet, Muriel; Cangini, G.; Vannucci, A.; Keil, N.; Bach, H.-G.; Grote, N.; Avramopoulos, H.; Kouloumentas, Ch.
2016-03-01
Polymers hold the promise for ultra-fast modulation of optical signals due to their potential for ultra-fast electro-optic (EO) response and high EO coefficient. In this work, we present the basic structure and properties of an efficient EO material system, and we summarize the efforts made within the project ICT-POLYSYS for the development of high-speed transmitters based on this system. More specifically, we describe successful efforts for the monolithic integration of multi-mode interference (MMI) couplers and Bragg gratings (BGs) along with Mach-Zehnder modulators (MZMs) on this platform, and for the hybrid integration of InP active elements in the form of laser diodes (LDs) and gain chips (GCs). Using these integration techniques and the combination of the hybrid optical chips with ultra-fast indium phosphide double heterojunction bipolar transistor (InP-DHBT) electronics, we develop and fully package a single 100 Gb/s transmitter and a 2×100 Gb/s transmitter that can support serial operation at this rate with the conventional non-return-to-zero on-off-keying (NRZ-OOK) modulation format. We also present the experimental evaluation of the devices, validating the efficiency of the monolithic and hybrid integration concepts and confirming the potential of this technology for single-lane 100 Gb/s optical connectivity in data-center network environments. Results from transmission experiments to this end include the achievement of a BER close to 6·10⁻⁹ in back-to-back (B2B) configuration, a BER lower than 10⁻⁷ for propagation over standard single-mode fiber (SSMF) with total length up to 1000 m, and a BER at the level of 10⁻⁵ after 1625 m of SSMF. Finally, plans for the use of the EO polymer system in a more complex hybrid integration platform for high-flexibility/high-capacity transmitters are also outlined.
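For NRZ-OOK results such as those quoted above, a measured BER can be translated into an equivalent Q-factor through the standard Gaussian-noise relation BER = 0.5 erfc(Q/sqrt(2)); the conversion below is a generic sanity check, not part of the paper.

```python
# Convert reported NRZ-OOK bit error rates to equivalent Q-factors (standard Gaussian relation).
from math import sqrt, log10
from scipy.special import erfcinv

def q_factor(ber):
    q = sqrt(2) * erfcinv(2 * ber)
    return q, 20 * log10(q)              # linear Q and Q in dB (20*log10 convention)

for ber in (6e-9, 1e-7, 1e-5):           # back-to-back, 1000 m SSMF, 1625 m SSMF results above
    q, q_db = q_factor(ber)
    print(f"BER = {ber:.0e}  ->  Q = {q:.2f} ({q_db:.1f} dB)")
```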
A Datacenter Backstage: The Knowledge that Supports the Brazilian Seismic Network
NASA Astrophysics Data System (ADS)
Calhau, J.; Assumpcao, M.; Collaço, B.; Bianchi, M.; Pirchiner, M.
2015-12-01
Historically, Brazilian seismology never had a clear strategic vision about how its data should be acquired, evaluated, stored and shared. Without a data management plan, data (for any practical purpose) could be lost, resulting in a non-uniform coverage that will reduce any chance of local and international collaboration, i.e., data will never become scientific knowledge. Since 2009, huge efforts from four different institutions are establishing the new permanent Brazilian Seismographic Network (RSBR), mainly with resources from PETROBRAS, the Brazilian Government oil company. Four FDSN sub-networks currently compose RSBR, with a total of 80 permanent stations. BL and BR codes (from BRASIS subnet) with 47 stations maintained by University of Sao Paulo (USP) and University of Brasilia (UnB) respectively; NB code (RSISNE subnet), with 16 stations deployed by University of Rio Grande do Norte (UFRN); and ON code (RSIS subnet), with 18 stations operated by the National Observatory (ON) in Rio de Janeiro. Most stations transmit data in real-time via satellite or cell-phone links. Each node acquires its own stations locally, and data is real-time shared using SeedLink. Archived data is distributed via ArcLink and/or FDSNWS services. All nodes use the SeisComP3 system for real-time processing and as a levering back-end. Open-source solutions like Seiscomp3 require some homemade tools to be developed, to help solve the most common daily problems of a data management center: local magnitude into the real-time earthquake processor, website plugins, regional earthquake catalog, contribution with ISC catalog, quality-control tools, data request tools, etc. The main data products and community activities include: kml files, data availability plots, request charts, summer school courses, an Open Lab Day and news interviews. Finally, a good effort was made to establish BRASIS sub-network and the whole RSBR as a unified project, that serves as a communication channel between individuals operating local networks.
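Since the archived data are exposed through ArcLink and FDSN web services, a typical client-side request looks like the ObsPy sketch below. The service URL and station code are placeholders; the real RSBR/USP endpoints should be taken from the network's documentation.

```python
# Fetching RSBR waveforms through an FDSN web service with ObsPy (URL and station are placeholders).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("http://fdsnws.example-rsbr.br")   # hypothetical RSBR FDSN endpoint
t0 = UTCDateTime("2015-06-01T00:00:00")

# Network "BL" is the USP-operated BRASIS code mentioned above; the station name is made up.
st = client.get_waveforms(network="BL", station="XXXX", location="*",
                          channel="HHZ", starttime=t0, endtime=t0 + 3600)
st.detrend("demean")                               # simple pre-processing step
print(st)                                          # summary of the retrieved traces
```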
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible, data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. We bridge the gap between research and practice by using a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto-optimal in a planning process, we use the powerful multi-objective optimization models as tools to move more efficiently and effectively out of the inferior decision space. The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
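A minimal sketch of the non-dominated (Pareto) filtering step that such multi-objective frameworks rely on, assuming all objectives are minimized; the candidate portfolios and metric values below are invented for illustration and are not CSU results.

# Keep only portfolios not dominated by any other candidate
# (all objectives assumed to be minimized, e.g. cost, shortage frequency).
from typing import Dict, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: Dict[str, Tuple[float, ...]]) -> Dict[str, Tuple[float, ...]]:
    return {name: objs for name, objs in candidates.items()
            if not any(dominates(other, objs)
                       for other_name, other in candidates.items()
                       if other_name != name)}

portfolios = {                      # (cost [$M], shortage frequency [%]) - illustrative
    "reuse+storage": (120.0, 4.0),
    "new pipeline":  (180.0, 2.5),
    "status quo":    (60.0, 12.0),
    "gold plated":   (200.0, 4.5),  # dominated by "new pipeline"
}
print(pareto_front(portfolios))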
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
Use of computers and Internet among people with severe mental illnesses at peer support centers.
Brunette, Mary F; Aschbrenner, Kelly A; Ferron, Joelle C; Ustinich, Lee; Kelly, Michael; Grinley, Thomas
2017-12-01
Peer support centers are an ideal setting where people with severe mental illnesses can access the Internet via computers for online health education, peer support, and behavioral treatments. The purpose of this study was to assess computer use and Internet access in peer support agencies. A peer-assisted survey assessed the frequency with which consumers in all 13 New Hampshire peer support centers (n = 702) used computers to access Internet resources. During the 30-day survey period, 200 of the 702 peer support consumers (28%) responded to the survey. More than three-quarters (78.5%) of respondents had gone online to seek information in the past year. About half (49%) of respondents were interested in learning about online forums that would provide information and peer support for mental health issues. Peer support centers may be a useful venue for Web-based approaches to education, peer support, and intervention. Future research should assess facilitators and barriers to use of Web-based resources among people with severe mental illness in peer support centers.
Annual Report of the Metals and Ceramics Information Center, 1 May 1979-30 April 1980.
1980-07-01
[OCR residue from report front matter: Battelle organizational and program listing.] The Metals and Ceramics Information Center (MCIC) is one of several technical information analysis centers (IACs) chartered and sponsored by the ...
NASA Technical Reports Server (NTRS)
Jones, H. W.
1984-01-01
The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.
DOT National Transportation Integrated Search
2003-10-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...
DOT National Transportation Integrated Search
2006-05-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...
Argonne Research Library | Argonne National Laboratory
NASA Astrophysics Data System (ADS)
Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman
2018-03-01
Fatigue due to long and continuous computer use can lead to problems associated with decreased performance and work motivation. The specific targets of the first phase of this research were: (1) identifying complaints among workers who use computers, using the Bourdon Wiersma test kit; and (2) finding an appropriate relaxation and work-posture design to reduce muscle fatigue in computer-based workers. The study follows a research-and-development method, which aims to produce new products or refine existing ones. The final products are a prototype back-holder, a monitor filter, and a relaxation exercise routine, together with a manual explaining how to perform the exercise while in front of the computer, to lower fatigue levels for computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted and the fatigue level of employees who use computers at Unimed's Administration Center was identified using the Bourdon Wiersma test, with the following results: (1) the average speed score of respondents in BAUK, BAAK and BAPSI after work was 8.4 (WS 13), which falls into the fairly good category; (2) the average accuracy score of respondents in BAUK, BAAK and BAPSI after work was 5.5 (WS 8), which falls into the doubtful category, indicating that computer users at the Unimed Administration Center experienced significant tiredness; and (3) the average consistency score of the fatigue measurements for computer users at Unimed's Administration Center after work was 5.5 (WS 8), also in the doubtful category, indicating that these computer users suffered from severe fatigue. In phase II, based on the results of the first phase, the researchers offer solutions in the form of the back-holder prototype, the monitor filter, and a properly designed relaxation exercise to reduce fatigue levels. Furthermore, to maximize the benefit of the exercise, a manual will be given to employees who regularly work in front of computers at Unimed's Administration Center.
Computer use in primary care and patient-physician communication.
Sobral, Dilermando; Rosenbaum, Marcy; Figueiredo-Braga, Margarida
2015-07-08
This study evaluated how physicians and patients perceive the impact of computer use on clinical communication, and how a patient-centered orientation can influence this impact. The study followed a descriptive cross-sectional design and included 106 family physicians and 392 patients. An original questionnaire assessed computer use, participants' perspective on its impact, and patient-centered strategies. Physicians reported spending 42% of consultation time in contact with the computer. Physicians reported a negative impact of computer use on patient-physician communication with regard to consultation length, confidentiality, maintaining eye contact, active listening, and the ability to understand the patient, while patients reported a positive effect for all items. Physicians considered that the usual placement of the computer in their consultation room was significantly unfavorable to patient-physician communication. Overall, physicians perceive the impact of computer use on patient-physician communication as negative, while patients perceive it as positive. Consultation support can represent a challenge to physicians, who recognize its negative impact on patient-centered orientation. Medical education programs aiming to enhance specific communication skills and to better integrate computer use in primary care settings are needed.
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g., the RENO experiment for neutrino research, the LIGO experiment for gravitational-wave detection, genome sequencing projects for biomedical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC operates for these research fields and discuss the data-center infrastructure management system deployed at GSDC.
High performance computing for advanced modeling and simulation of materials
NASA Astrophysics Data System (ADS)
Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang
2017-02-01
The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.
ERIC Educational Resources Information Center
Ba, Harouna; Tally, Bill; Tsikalas, Kallen
The EDC (Educational Development Center) Center for Children and Technology (CCT) and Computers for Youth (CFY) completed a 1-year comparative study of children's use of computers in low- and middle-income homes. The study explores the digital divide as a literacy issue, rather than merely a technical one. Digital literacy is defined as a set of…
DOT National Transportation Integrated Search
2006-07-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...
Aviation Mechanic General, Airframe, and Powerplant Knowledge Test Guide
DOT National Transportation Integrated Search
1995-01-01
The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests. Refer to appendix 1 in this guide for a list of computer testing designees. This knowledge test guide was dev...
DOT National Transportation Integrated Search
2004-01-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...
75 FR 70899 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-19
... submit to the Office of Management and Budget (OMB) for clearance the following proposal for collection... Annual Burden Hours: 2,952. Public Computer Center Reports (Quarterly and Annually) Number of Respondents... specific to Infrastructure and Comprehensive Community Infrastructure, Public Computer Center, and...
PCs: Key to the Future. Business Center Provides Sound Skills and Good Attitudes.
ERIC Educational Resources Information Center
Pay, Renee W.
1991-01-01
The Advanced Computing/Management Training Program at Jordan Technical Center (Sandy, Utah) simulates an automated office to teach five sets of skills: computer architecture and operating systems, word processing, data processing, communications skills, and management principles. (SK)
NASA Technical Reports Server (NTRS)
Gillian, Ronnie E.; Lotts, Christine G.
1988-01-01
The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.
How the Theory of Computing Can Help in Space Exploration
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Longpre, Luc
1997-01-01
The opening of the NASA Pan American Center for Environmental and Earth Sciences (PACES) at the University of Texas at El Paso made it possible to organize the student Center for Theoretical Research and its Applications in Computer Science (TRACS). In this abstract, we briefly describe the main NASA-related research directions of the TRACS center, and give an overview of the preliminary results of student research.
ERIC Educational Resources Information Center
Osman, Abdulaziz
2016-01-01
The purpose of this research study was to examine the fears surrounding the adoption of cloud computing, spanning factors such as leaders' fear of change and the complexity of the technology, in 9-1-1 dispatch centers in the USA. The problem addressed in the study was that many 9-1-1 dispatch centers in the USA are still using old…
Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.
2010-01-01
Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198
Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand
ERIC Educational Resources Information Center
Jayakar, Krishna; Park, Eun-A
2012-01-01
The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…
Transportation Research and Analysis Computing Center (TRACC) Year 6 Quarter 4 Progress Report
DOT National Transportation Integrated Search
2013-03-01
Argonne National Laboratory initiated a FY2006-FY2009 multi-year program with the US Department of Transportation (USDOT) on October 1, 2006, to establish the Transportation Research and Analysis Computing Center (TRACC). As part of the TRACC project...
Aerodynamic Characterization of a Modern Launch Vehicle
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Holland, Scott D.; Blevins, John A.
2011-01-01
A modern launch vehicle is by necessity an extremely integrated design. The accurate characterization of its aerodynamic characteristics is essential to determine design loads, to design flight control laws, and to establish performance. The NASA Ares Aerodynamics Panel has been responsible for technical planning, execution, and vetting of the aerodynamic characterization of the Ares I vehicle. An aerodynamics team supporting the Panel consists of wind tunnel engineers, computational engineers, database engineers, and other analysts that address topics such as uncertainty quantification. The team resides at three NASA centers: Langley Research Center, Marshall Space Flight Center, and Ames Research Center. The Panel has developed strategies to synergistically combine both the wind tunnel efforts and the computational efforts with the goal of validating the computations. Selected examples highlight key flow physics and, where possible, the fidelity of the comparisons between wind tunnel results and the computations. Lessons learned summarize what has been gleaned during the project and can be useful for other vehicle development projects.
Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements
1975-10-01
[OCR residue from report documentation page.] This report was prepared by personnel in the Computational Mechanics Section of the Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, Huntsville, Alabama.
RESIF national datacentre : new features and forthcoming evolutions
NASA Astrophysics Data System (ADS)
Pequegnat, C.; Volcke, P.; le Tanou, J.; Wolyniec, D.; Lecointre, A.; Guéguen, P.
2013-12-01
RESIF is a nationwide French project aimed at building a high-quality system to observe and understand the Earth's interior. The goal is to create a network throughout mainland France comprising 750 seismometers and geodetic measurement instruments, 250 of which will be mobile, so that the observation network can be focused on specific investigation subjects and geographic locations. The RESIF data distribution centre, which is part of the global project, is operated by the Université Joseph Fourier (Grenoble, France) and has been under implementation for two years. Data from the French broadband permanent network, the strong-motion permanent network, and the mobile seismological array are freely accessible as real-time streams and continuous validated data, along with instrumental metadata, delivered using widely known formats and request tools. New features of the datacentre are: new, modern distribution tools (two FDSN web services have been implemented and deliver data and metadata); new data and datasets (the number of permanent stations rose by over 40% in one year, the RESIF archive now includes past data back to 1995 and data from new networks, data from mobile experiments prior to 2011 are being progressively released, and data from new mobile experiments in the Alps and in the Pyrenees are being progressively integrated); and new infrastructure: (1) the RESIF databank is about to be connected to the grid storage of the university High Performance Computing (HPC) centre; as a scientific use case of this datacenter facility, the focus is on intensive exploitation of combined data from permanent and mobile networks; (2) the RESIF databank will be progressively hosted on a new shared storage facility operated by the Université Joseph Fourier, which offers high-availability data storage (in both block and file modes) as well as backup and long-term archival capabilities and will be fully operational at the beginning of 2014.
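For illustration, the two FDSN web services mentioned above can also be queried directly over HTTP through the generic FDSN web-service interface; the base URL, network and station codes below are assumptions for the sketch, not the actual RESIF endpoints.

# Sketch of querying FDSN station and dataselect web services with plain HTTP.
# The base URL and codes are placeholders; the query parameters follow the
# generic FDSN web-service specification.
import requests

BASE = "http://ws.example-resif.fr"   # hypothetical FDSNWS endpoint

# Station metadata (text output, one row per channel, where supported)
meta = requests.get(f"{BASE}/fdsnws/station/1/query",
                    params={"network": "FR", "level": "channel", "format": "text"},
                    timeout=30)
print(meta.text[:500])

# One hour of miniSEED waveform data for a single channel
data = requests.get(f"{BASE}/fdsnws/dataselect/1/query",
                    params={"network": "FR", "station": "XXXX", "channel": "HHZ",
                            "starttime": "2013-01-01T00:00:00",
                            "endtime": "2013-01-01T01:00:00"},
                    timeout=30)
open("example.mseed", "wb").write(data.content)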
Developing computer training programs for blood bankers.
Eisenbrey, L
1992-01-01
Two surveys were conducted in July 1991 to gather information about computer training currently performed within American Red Cross Blood Services Regions. One survey was completed by computer trainers from software developer-vendors and regional centers. The second survey was directed to the trainees, to determine their perception of the computer training. The surveys identified the major concepts, length of training, evaluations, and methods of instruction used. Strengths and weaknesses of training programs were highlighted by trainee respondents. Using the survey information and other sources, recommendations (including those concerning which computer skills and tasks should be covered) are made that can be used as guidelines for developing comprehensive computer training programs at any blood bank or blood center.
Removing the center from computing: biology's new mode of digital knowledge production.
November, Joseph
2011-06-01
This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge and in the design of computers themselves as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960 the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's quality as a device that was centralized posed an even greater challenge to potential biologist users than did the computer's need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond, the personal computer.
Autonomic Computing for Spacecraft Ground Systems
NASA Technical Reports Server (NTRS)
Li, Zhenping; Savkli, Cetin; Jones, Lori
2007-01-01
Autonomic computing for spacecraft ground systems increases the system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.
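A minimal sketch of the criteria-action idea, in which incoming telemetry is checked against a table of criteria and matching actions are triggered; this illustrates the concept only and is not the actual GMSEC or CAT implementation.

# Illustrative criteria-action table: each row pairs a criterion evaluated
# against telemetry with an action. Field names and thresholds are invented.
from typing import Callable, Dict, List, Tuple

Telemetry = Dict[str, float]
Rule = Tuple[str, Callable[[Telemetry], bool], Callable[[Telemetry], None]]

def notify_ops(tm: Telemetry) -> None:
    print(f"ALERT: battery voltage low ({tm['battery_v']:.2f} V)")

def restart_downlink(tm: Telemetry) -> None:
    print("ACTION: restarting downlink process")

CRITERIA_ACTION_TABLE: List[Rule] = [
    ("low battery",    lambda tm: tm["battery_v"] < 24.0, notify_ops),
    ("downlink stale", lambda tm: tm["since_last_frame_s"] > 300, restart_downlink),
]

def evaluate(tm: Telemetry) -> None:
    for name, criterion, action in CRITERIA_ACTION_TABLE:
        if criterion(tm):
            action(tm)

evaluate({"battery_v": 23.1, "since_last_frame_s": 12.0})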
Computers as learning resources in the health sciences: impact and issues.
Ellis, L B; Hannigan, G G
1986-01-01
Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2
2011-01-01
area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek -like holodeck, where holographic avatars could
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
[Computer-aided prescribing: from utopia to reality].
Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P
2005-05-31
To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Descriptive, cross-sectional design. Torreblanca Health Center in the province of Seville, southern Spain. From 29 October 2003 to the present a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. All patients on the center's list of patients who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Total number of administrative visits per patient for patients who came to the center to renew prescriptions for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004. Total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. The mean number of administrative visits per month during the period from February to July 2003 was 160, compared to a mean number of 64 visits during the period from February to July 2004. The reduction in the number of visits for prescription renewal was 60%. Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could help reduce the administrative burden considerably in primary care if the system were used in all centers.
Computer systems and software engineering
NASA Technical Reports Server (NTRS)
Mckay, Charles W.
1988-01-01
The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.
Use of PL/1 in a Bibliographic Information Retrieval System.
ERIC Educational Resources Information Center
Schipma, Peter B.; And Others
The Information Sciences section of IIT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…
1991-05-01
Marine Corps Training Systems (CBESS) memorization training; Intelligence Center, Dam Neck, threat memorization training; Commander Tactical Wings, Atlantic ...News Shipbuilding technical training; AEGIS Training Center, Dare; artificial intelligence (AI) tools; computerized front-end analysis tools; NETSCPAC ...Technology Department and provides computational and electronic mail support for research in areas of artificial intelligence, computer-assisted instruction
Postdoctoral Fellow | Center for Cancer Research
The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and
Venus - Computer Simulated Global View Centered at 0 Degrees East Longitude
1996-03-14
This global view of the surface of Venus is centered at 0 degrees east longitude. NASA Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping were mapped onto a computer-simulated globe to create this image. http://photojournal.jpl.nasa.gov/catalog/PIA00257
Computer-Aided Corrosion Program Management
NASA Technical Reports Server (NTRS)
MacDowell, Louis
2010-01-01
This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shankar, Arjun
Computer scientist Arjun Shankar is director of the Compute and Data Environment for Science (CADES), ORNL’s multidisciplinary big data computing center. CADES offers computing, networking and data analytics to facilitate workflows for both ORNL and external research projects.
NASA Astrophysics Data System (ADS)
Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko
Lowering power consumption has become a worldwide concern. It is also an increasingly important issue in computer systems, as reflected by the growing use of software-as-a-service and cloud computing, whose market has expanded since 2000; at the same time, the number of data centers that house and manage the associated computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage and is still rising rapidly. This research focuses on air conditioning, which accounts for the largest portion of electric power consumption in data centers, and proposes a technique to lower power consumption by using natural cool air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we examine the extent to which energy reduction is possible when a data center is located in Hokkaido.
An overview of the DII-HEP OpenStack based CMS data analysis
NASA Astrophysics Data System (ADS)
Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.
2015-05-01
An OpenStack-based private cloud with the Cluster File System has been built and used for both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows CMS applications to run without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES). An OpenStack plugin has been developed for Argus-EES. The Host Identity Protocol (HIP) was designed for mobile networks and provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of the IP address, which increases network availability for applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and our lessons learned in creating an OpenStack-based cloud for HEP.
2006-08-01
preparing a COBRE Molecular Targets Project with a goal to extend the computational work of the Specific Aims of this project to the discovery of novel ...million Center of Biomedical Research Excellence (COBRE) grant from the National Center for Research Resources at the National Institutes of Health ...three-year COBRE-funded project in Molecular Targets. My recruitment to the University of Louisville's Brown Cancer Center and my proposed COBRE
DoDs Efforts to Consolidate Data Centers Need Improvement
2016-03-29
Consolidation Initiative, February 26, 2010. Green IT minimizes the negative environmental impact of IT operations by ensuring that computers and computer-related ...objectives for consolidating data centers. DoD's objectives were to: reduce cost; reduce environmental impact; improve efficiency and service levels ...number of DoD data centers. ...information in DCIM, the DoD CIO did not confirm whether those changes would impact DoD's
Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith
2009-01-01
This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.
The Center for Computational Biology: resources, achievements, and challenges
Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2011-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221
The Center for Computational Biology: resources, achievements, and challenges.
Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2012-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.
Decomposition of algebraic sets and applications to weak centers of cubic systems
NASA Astrophysics Data System (ADS)
Chen, Xingwu; Zhang, Weinian
2009-10-01
There are many methods, such as Gröbner bases, characteristic sets and resultants, for computing the algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and unnecessary factors arising in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulting from the period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
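As a small illustration of the elimination step via Sylvester resultants (using generic polynomials, not the period-constant systems studied in the paper), SymPy can be used as follows.

# Eliminating one variable from a pair of polynomials via the Sylvester
# resultant. The polynomials are generic examples for illustration.
from sympy import symbols, resultant, factor

x, y = symbols("x y")
p = x**2 + y**2 - 1          # unit circle
q = x - y                    # line through the origin

# Resultant with respect to x: a polynomial in y whose roots are the
# y-coordinates of common solutions of p = q = 0.
r = resultant(p, q, x)
print(factor(r))             # 2*y**2 - 1, i.e. y = ±1/sqrt(2)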
Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Bartels, R. E.
2008-01-01
NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
Expanding HPC and Research Computing--The Sustainable Way
ERIC Educational Resources Information Center
Grush, Mary
2009-01-01
Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…
School Data Processing Services in Texas. A Cooperative Approach. [Revised.
ERIC Educational Resources Information Center
Texas Education Agency, Austin. Management Information Center.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
Remote Science Operation Center research
NASA Technical Reports Server (NTRS)
Banks, P. M.
1986-01-01
Progress in the following areas is discussed: the design, planning and operation of a remote science payload operations control center; design and planning of a data link via satellite; and the design and prototyping of an advanced workstation environment for multi-media (3-D computer aided design/computer aided engineering, voice, video, text) communications and operations.
SAM: The "Search and Match" Computer Program of the Escherichia coli Genetic Stock Center
ERIC Educational Resources Information Center
Bachmann, B. J.; And Others
1973-01-01
Describes a computer program used at a genetic stock center to locate particular strains of bacteria. The program can match up to 30 strain descriptions requested by a researcher with the records on file. Uses of this particular program can be made in many fields. (PS)
Hibbing Community College's Community Computer Center.
ERIC Educational Resources Information Center
Regional Technology Strategies, Inc., Carrboro, NC.
This paper reports on the development of the Community Computer Center (CCC) at Hibbing Community College (HCC) in Minnesota. HCC is located in the largest U.S. iron mining area in the United States. Closures of steel-producing plants are affecting the Hibbing area. Outmigration, particularly of younger workers and their families, has been…
48 CFR 9905.506-60 - Illustrations.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., installs a computer service center to begin operations on May 1. The operating expense related to the new ...operating expenses of the computer service center for the 8-month part of the cost accounting period may be ...
The Mathematics and Computer Science Learning Center (MLC).
ERIC Educational Resources Information Center
Abraham, Solomon T.
The Mathematics and Computer Science Learning Center (MLC) was established in the Department of Mathematics at North Carolina Central University during the fall semester of the 1982-83 academic year. The initial operations of the MLC were supported by grants to the University from the Burroughs-Wellcome Company and the Kenan Charitable Trust Fund.…
Film Library Information Management System.
ERIC Educational Resources Information Center
Minnella, C. Vincent; And Others
The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…
1999-05-26
Looking for a faster computer? How about an optical computer that processes data streams simultaneously and works at the speed of light? In space, NASA researchers have formed optical thin films. By turning these thin films into very fast optical computer components, scientists could improve computer tasks such as pattern recognition. Dr. Hossin Abdeldayem, a physicist at NASA's Marshall Space Flight Center (MSFC) in Huntsville, AL, is working with lasers as part of an optical system for pattern recognition. These systems can be used for automated fingerprinting, photographic scanning, and the development of sophisticated artificial intelligence systems that can learn and evolve. Photo credit: NASA/Marshall Space Flight Center (MSFC)
Knowledge management: Role of the the Radiation Safety Information Computational Center (RSICC)
NASA Astrophysics Data System (ADS)
Valentine, Timothy
2017-09-01
The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.
Computer Learning for Young Children.
ERIC Educational Resources Information Center
Choy, Anita Y.
1995-01-01
Computer activities that combine education and entertainment make learning easy and fun for preschoolers. Computers encourage social skills, language and literacy skills, cognitive development, problem solving, and eye-hand coordination. The paper describes one teacher's experiences setting up a computer center and using computers with…
Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.
ERIC Educational Resources Information Center
Murray, David R.
This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-10-16
A workshop was held at the RIKEN-BNL Research Center on October 16, 1998, as part of the first anniversary celebration for the center. This meeting brought together the physicists from RIKEN-BNL, BNL and Columbia who are using the QCDSP (Quantum Chromodynamics on Digital Signal Processors) computer at the RIKEN-BNL Research Center for studies of QCD. Many of the talks in the workshop were devoted to domain wall fermions, a discretization of the continuum description of fermions which preserves the global symmetries of the continuum, even at finite lattice spacing. This formulation has been the subject of analytic investigation for some time and has reached the stage where large-scale simulations in QCD seem very promising. With the computational power available from the QCDSP computers, scientists are looking forward to an exciting time for numerical simulations of QCD.
Computational Science News | Computational Science | NREL
NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC
Computer Training for Seniors: An Academic-Community Partnership
ERIC Educational Resources Information Center
Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha
2013-01-01
Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…
Benefits Analysis of Multi-Center Dynamic Weather Routes
NASA Technical Reports Server (NTRS)
Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien
2014-01-01
Dynamic weather routes are flight plan corrections that can provide airborne flights with more than a user-specified number of minutes of flying-time savings compared with their current flight plan. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region) while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been running continuously with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States. These Centers are generally the most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate an overall average savings of about 10 minutes of flying time per flight.
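A minimal sketch of the flying-time-savings idea, assuming a great-circle approximation and a fixed ground speed; the waypoints and the 450 kt speed are illustrative placeholders, and this is not the FACET or Dynamic Weather Routes algorithm itself.

# Compare the along-route distance of the remaining flight plan with a direct
# path to a downstream capture fix, and convert the difference to minutes.
import math

def gc_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine)."""
    r_nm = 3440.065                       # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def route_nm(waypoints):
    return sum(gc_nm(*a, *b) for a, b in zip(waypoints, waypoints[1:]))

current_plan = [(32.9, -97.0), (34.0, -95.0), (35.2, -92.2), (36.1, -86.7)]  # dogleg route
capture_fix = current_plan[-1]
direct = [current_plan[0], capture_fix]

saved_nm = route_nm(current_plan) - route_nm(direct)
ground_speed_kt = 450.0                   # assumed ground speed
print(f"Potential savings: {60.0 * saved_nm / ground_speed_kt:.1f} minutes")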
Library Media Learning and Play Center.
ERIC Educational Resources Information Center
Faber, Therese; And Others
Preschool educators developed a library media learning and play center to enable children to "experience" a library; establish positive attitudes about the library; and encourage respect for self, others, and property. The center had the following areas: check-in and check-out desk, quiet reading section, computer center, listening center, video…
NASA Technical Reports Server (NTRS)
Moore, Robert C.
1998-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS is charted to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are facilities that consist of networks of remote servers used to store, access, and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of a service provider.
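A minimal sketch (not the authors' implementation) of the selection rule described above: each data center's total energy is the sum of its servers' estimated virtualization energies, and a task is routed to the data center with the smallest total. All names and numbers below are illustrative assumptions.

```python
# Illustrative energy-based data center selection: sum per-server energy
# estimates and route the task to the least-energy data center.

def total_energy(server_energies):
    """Sum per-server virtualization energy estimates for one data center."""
    return sum(server_energies)

def select_data_center(data_centers):
    """Return the name of the data center with the lowest total energy.

    data_centers: dict mapping name -> list of per-server energy values.
    """
    return min(data_centers, key=lambda name: total_energy(data_centers[name]))

if __name__ == "__main__":
    # Hypothetical per-server energy estimates (arbitrary units).
    centers = {
        "dc-east": [120.0, 95.5, 130.2],
        "dc-west": [80.3, 110.1, 99.7],
        "dc-north": [140.0, 150.4, 60.0],
    }
    print(select_data_center(centers))  # routes the task to the least-energy center
```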
Garzón-Alvarado, Diego A
2013-01-21
This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations for two molecules (BMP and Noggin) whose behavior is of the activator-substrate type; its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells, so that mature cells can become osteoblasts due to the action of BMP2. Therefore, with this model, we can have two frontal primary centers, two parietal, and one, two, or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those centers found at an embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
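For reference, a generic activator-substrate (depleted-substrate) reaction-diffusion system of the kind invoked above can be written as follows. This is an illustrative form only, not necessarily the exact equations of the cited model; a denotes the activator and s the substrate, and the k's and D's are generic rate and diffusion constants.

```latex
% Generic activator--substrate reaction--diffusion system (illustrative form):
\begin{align}
\frac{\partial a}{\partial t} &= D_a \nabla^2 a + k_1 - k_2\,a + k_3\,a^2 s, \\
\frac{\partial s}{\partial t} &= D_s \nabla^2 s + k_4 - k_3\,a^2 s .
\end{align}
% Turing patterns (interpreted here as ossification centers) require the
% substrate to diffuse much faster than the activator, D_s \gg D_a.
```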
Establishment of a Beta Test Center for the NPARC Code at Central State University
NASA Technical Reports Server (NTRS)
Okhio, Cyril B.
1996-01-01
Central State University has received a supplementary award to purchase computer workstations for the NPARC (National Propulsion Ames Research Center) computational fluid dynamics code BETA Test Center. The computational code has also been acquired for installation on the workstations. The acquisition of this code is an initial step for CSU in joining an alliance composed of NASA, AEDC, The Aerospace Industry, and academia. A post-Doctoral research Fellow from a neighboring university will assist the PI in preparing a template for Tutorial documents for the BETA test center. The major objective of the alliance is to establish a national applications-oriented CFD capability, centered on the NPARC code. By joining the alliance, the BETA test center at CSU will allow the PI, as well as undergraduate and post-graduate students to test the capability of the NPARC code in predicting the physics of aerodynamic/geometric configurations that are of interest to the alliance. Currently, CSU is developing a once a year, hands-on conference/workshop based upon the experience acquired from running other codes similar to the NPARC code in the first year of this grant.
Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
Patient-centered computing: can it curb malpractice risk?
Bartlett, E E
1993-01-01
The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors.
Patient-centered computing: can it curb malpractice risk?
Bartlett, E. E.
1993-01-01
The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors. PMID:8130563
Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions with the early results of the Lewis Research Center GPU-3 tests are compared.
A Report on the Design and Construction of the University of Massachusetts Computer Science Center.
ERIC Educational Resources Information Center
Massachusetts State Office of the Inspector General, Boston.
This report describes a review conducted by the Massachusetts Office of the Inspector General on the construction of the Computer Science and Development Center at the University of Massachusetts, Amherst. The office initiated the review after hearing concerns about the management of the project, including its delayed completion and substantial…
Higher-order methods for simulations on quantum computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sornborger, A.T.; Stewart, E.D.
1999-09-01
To implement many-qubit gates for use in quantum simulations on quantum computers efficiently, we develop and present methods for reexpressing $\exp[-i(H_1+H_2+\cdots)\Delta t]$ as a product of factors $\exp[-iH_1\Delta t]$, $\exp[-iH_2\Delta t]$, ..., which is accurate to third or fourth order in $\Delta t$. The methods we derive are an extended form of the symplectic method, and can also be used for integration of classical Hamiltonians on classical computers. We derive both integral and irrational methods, and find the most efficient methods in both cases. © 1999 The American Physical Society.
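For context, the lowest-order members of this family of product formulas are standard results (not reproduced from the cited derivation); the cited work constructs the third- and fourth-order extensions.

```latex
% First-order Lie--Trotter splitting:
\begin{equation}
e^{-i(H_1+H_2)\Delta t} = e^{-iH_1\Delta t}\,e^{-iH_2\Delta t} + O(\Delta t^2),
\end{equation}
% and the symmetric (Strang) splitting, accurate to one order higher:
\begin{equation}
e^{-i(H_1+H_2)\Delta t} = e^{-iH_1\Delta t/2}\,e^{-iH_2\Delta t}\,e^{-iH_1\Delta t/2} + O(\Delta t^3).
\end{equation}
```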
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
This article considers the support of scientific projects throughout their lifecycle in the computer center, in every aspect of that support. The Configuration Management system plays a connecting role in the processes related to the provision and support of a computer center's services. Given the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which raises the requirements on the Configuration Management system. For every aspect of research project support, the paper reviews the influence of the Configuration Management system and describes the development of the corresponding elements of the system.
Finance for practicing radiologists.
Berlin, Jonathan W; Lexa, Frank James
2005-03-01
This article reviews basic finance for radiologists. Using the example of a hypothetical outpatient computed tomography center, readers are introduced to the concept of net present value. This concept refers to the current real value of anticipated income in the future, realizing that revenue in the future has less value than it does today. Positive net present value projects add wealth to a practice and should be pursued. The article details how costs and revenues for a hypothetical outpatient computed tomography center are determined and elucidates the difference between fixed costs and variable costs. The article provides readers with the steps used to calculate the break-even volume for an outpatient computed tomography center given situation-specific assumptions regarding staff, equipment lease rates, rent, and third-party payer mix.
Readiness of healthcare providers for eHealth: the case from primary healthcare centers in Lebanon.
Saleh, Shadi; Khodor, Rawya; Alameddine, Mohamad; Baroud, Maysa
2016-11-10
eHealth can positively impact the efficiency and quality of healthcare services. Its potential benefits extend to the patient, healthcare provider, and organization. Primary healthcare (PHC) settings may particularly benefit from eHealth. In these settings, healthcare provider readiness is key to successful eHealth implementation. Accordingly, it is necessary to explore the potential readiness of providers to use eHealth tools. Therefore, the purpose of this study was to assess the readiness of healthcare providers working in PHC centers in Lebanon to use eHealth tools. A self-administered questionnaire was used to assess participants' socio-demographics, computer use, literacy, and access, and participants' readiness for eHealth implementation (appropriateness, management support, change efficacy, personal beneficence). The study included primary healthcare providers (physicians, nurses, other providers) working in 22 PHC centers distributed across Lebanon. Descriptive and bivariate analyses (ANOVA, independent t-test, Kruskal Wallis, Tamhane's T2) were used to compare participant characteristics to the level of readiness for the implementation of eHealth. Of the 541 questionnaires, 213 were completed (response rate: 39.4 %). The majority of participants were physicians (46.9 %), and nurses (26.8 %). Most physicians (54.0 %), nurses (61.4 %), and other providers (50.9 %) felt comfortable using computers, and had access to computers at their PHC center (physicians: 77.0 %, nurses: 87.7 %, others: 92.5 %). Frequency of computer use varied. The study found a significant difference for personal beneficence, management support, and change efficacy among different healthcare providers, and relative to participants' level of comfort using computers. There was a significant difference by level of comfort using computers and appropriateness. A significant difference was also found between those with access to computers in relation to personal beneficence and change efficacy; and between frequency of computer use and change efficacy. The implementation of eHealth cannot be achieved without the readiness of healthcare providers. This study demonstrates that the majority of healthcare providers at PHC centers across Lebanon are ready for eHealth implementation. The findings of this study can be considered by decision makers to enhance and scale-up the use of eHealth in PHC centers nationally. Efforts should be directed towards capacity building for healthcare providers.
Selected Papers of the Southeastern Writing Center Association.
ERIC Educational Resources Information Center
Roberts, David H., Ed.; Wolff, William C., Ed.
Addressing a variety of concerns of writing center directors and staff, directors of freshman composition, and English department chairs, the papers in this collection discuss writing center research and evaluation, writing center tutors, and computers in the writing center. The titles of the essays and their authors are as follows: (1) "Narrative…
The Hospital-Based Drug Information Center.
ERIC Educational Resources Information Center
Hopkins, Leigh
1982-01-01
Discusses the rise of drug information centers in hospitals and medical centers, highlighting staffing, functions, typical categories of questions received by centers, and sources used. An appendix of drug information sources included in texts, updated services, journals, and computer databases is provided. Thirteen references are listed. (EJS)
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Singularity: Scientific containers for mobility of compute
Kurtzer, Gregory M.; Bauer, Michael W.
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.
Computer Center: Setting Up a Microcomputer Center--1 Person's Perspective.
ERIC Educational Resources Information Center
Duhrkopf, Richard, Ed.; Collins, Michael, A. J., Ed.
1988-01-01
Considers eight components involved in setting up a microcomputer center for use with college classes. Discussions include hardware, software, physical facility, furniture, technical support, personnel, continuing financial expenditures, and security. (CW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pete Beckman and Ian Foster
Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.
NASA Technical Reports Server (NTRS)
Pitts, William C; Nielsen, Jack N; Kaattari, George E
1957-01-01
A method is presented for calculating the lift and centers of pressure of wing-body and wing-body-tail combinations at subsonic, transonic, and supersonic speeds. A set of design charts and a computing table are presented which reduce the computations to routine operations. Comparison between the estimated and experimental characteristics for a number of wing-body and wing-body-tail combinations shows correlation to within ±10 percent on lift and to within about ±0.02 of the body length on center of pressure.
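As a hedged illustration of how component contributions combine into an overall center of pressure, the following is a standard moment balance, not the report's specific design charts or interference factors.

```latex
% Total lift and center of pressure from component contributions:
\begin{equation}
L = \sum_i L_i, \qquad
x_{cp} = \frac{\sum_i L_i\, x_{cp,i}}{\sum_i L_i},
\end{equation}
% where L_i and x_{cp,i} are the lift and center-of-pressure location of each
% component (body, wing, tail), including any interference contributions.
```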
The Role of Computers in Research and Development at Langley Research Center
NASA Technical Reports Server (NTRS)
Wieseman, Carol D. (Compiler)
1994-01-01
This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1991-01-01
A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) in dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. Future MSS needs for future computing environments are discussed.
ERIC Educational Resources Information Center
Insolia, Gerard
This document contains course outlines in computer-aided manufacturing developed for a business-industry technology resource center for firms in eastern Pennsylvania by Northampton Community College. The four units of the course cover the following: (1) introduction to computer-assisted design (CAD)/computer-assisted manufacturing (CAM); (2) CAM…
NASA Technical Reports Server (NTRS)
Harrison, Cecil A.
1986-01-01
The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.
Mind Transplants Or: The Role of Computer Assisted Instruction in the Future of the Library.
ERIC Educational Resources Information Center
Lyon, Becky J.
Computer assisted instruction (CAI) may well represent the next phase in the involvement of the library or learning resources center with media and the educational process. The Lister Hill Center Experimental CAI Network was established in July, 1972, on the recommendation of the National Library of Medicine, to test the feasibility of sharing CAI…
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.
ERIC Educational Resources Information Center
Santoso, Harry B.; Batuparan, Alivia Khaira; Isal, R. Yugo K.; Goodridge, Wade H.
2018-01-01
Student Centered e-Learning Environment (SCELE) is a Moodle-based learning management system (LMS) that has been modified to enhance learning within a computer science department curriculum offered by the Faculty of Computer Science of a large public university in Indonesia. This Moodle provided a mechanism to record students' activities when…
ERIC Educational Resources Information Center
Skowronski, Steven D.
This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…
ERIC Educational Resources Information Center
Buckley, Elizabeth; Johnston, Peter
In February 1977, computer assisted instruction (CAI) was introduced to the Great Neck Adult Learning Centers (GNALC) to promote greater cognitive and affective growth of educationally disadvantaged adults. The project expanded to include not only adult basic education (ABE) students studying in the learning laboratory, but also ABE students…
The Development of a Robot-Based Learning Companion: A User-Centered Design Approach
ERIC Educational Resources Information Center
Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong
2015-01-01
A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…
CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.
Ferčec, Brigita; Mahdi, Adam
2013-01-01
Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.
ERIC Educational Resources Information Center
Tsai, Chia-Wen; Shen, Pei-Di; Lin, Rong-An
2015-01-01
This study investigated, via quasi-experiments, the effects of student-centered project-based learning with initiation (SPBL with Initiation) on the development of students' computing skills. In this study, 96 elementary school students were selected from four class sections taking a course titled "Digital Storytelling" and were assigned…
EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.
HPCCP/CAS Workshop Proceedings 1998
NASA Technical Reports Server (NTRS)
Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)
1999-01-01
This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.
NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report
NASA Technical Reports Server (NTRS)
Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ
2013-01-01
The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities
ED adds business center to wait area.
2007-10-01
Providing your patients with Internet access in the waiting area can do wonders for their attitudes and make them much more understanding of long wait times. What's more, it doesn't take a fortune to create a business center. The ED at Florida Hospital Celebration (FL) Health made a world of difference with just a couple of computers and a printer. Have your information technology staff set the computers up to preserve the privacy of your internal computer system, and block out offensive sites. Access to medical sites can help reinforce your patient education efforts.
Application of technology developed for flight simulation at NASA. Langley Research Center
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1991-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.
Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Marquez, Andres; Rawson, Andrew
2013-06-30
As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
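A minimal sketch of the DCeP ratio as defined above. The choice of units, the definition of "useful work," and the example numbers are assumptions for illustration, not values from the study.

```python
# DCeP = useful work produced / energy consumed producing that work.

def dcep(useful_work, energy_consumed_kwh):
    """Data Center Energy Productivity: useful work per unit of energy."""
    if energy_consumed_kwh <= 0:
        raise ValueError("energy consumed must be positive")
    return useful_work / energy_consumed_kwh

# Example: 1.8 million completed task units against 42,000 kWh over the
# measurement window (hypothetical numbers).
print(dcep(1_800_000, 42_000))  # task units per kWh
```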
ENVIRONMENTAL BIOINFORMATICS AND COMPUTATIONAL TOXICOLOGY CENTER
The Center activities focused on integrating developmental efforts from the various research projects of the Center, and collaborative applications involving scientists from other institutions and EPA, to enhance research in critical areas. A representative sample of specif...
Improving User Notification on Frequently Changing HPC Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuson, Christopher B; Renaud, William A
2016-01-01
Today's HPC centers' user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasting valuable compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
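A hedged sketch of the final step described above: fitting an ellipse by least squares to the boundary points that survive the curvature screening and taking its center as the pupil-center estimate. This is a generic algebraic conic fit, not the authors' implementation, and the synthetic, partially occluded data are illustrative.

```python
# Least-squares ellipse fit: estimate the pupil center from boundary points.
import numpy as np

def ellipse_center(x, y):
    """Fit A x^2 + B xy + C y^2 + D x + E y = 1 in a mean-shifted frame and
    return the ellipse center in the original coordinates."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    u, v = x - mx, y - my                      # shift so the origin lies inside
    M = np.column_stack([u * u, u * v, v * v, u, v])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(u), rcond=None)[0]
    # Center where the conic's gradient vanishes: 2A xc + B yc + D = 0, etc.
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc + mx, yc + my

if __name__ == "__main__":
    # Synthetic boundary with a gap simulating eyelid occlusion (hypothetical).
    t = np.linspace(0.3, 5.5, 200)
    xs = 120 + 30 * np.cos(t)
    ys = 90 + 22 * np.sin(t)
    print(ellipse_center(xs, ys))              # close to (120, 90)
```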
NASA Technical Reports Server (NTRS)
Moore, Robert C.
1998-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. Research is carried out by a staff of full-time scientists, augmented by visitors, students, post-doctoral candidates, and visiting university faculty. The primary mission of RIACS is charted to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: Automated Reasoning, Human-Centered Computing, and High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.
NASA Technical Reports Server (NTRS)
Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.
1986-01-01
Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.
Bayesian Research at the NASA Ames Research Center,Computational Sciences Division
NASA Technical Reports Server (NTRS)
Morris, Robin D.
2003-01-01
NASA Ames Research Center is one of NASA's oldest centers, having started out as part of the National Advisory Committee for Aeronautics (NACA). The site, about 40 miles south of San Francisco, still houses many wind tunnels and other aviation-related departments. In recent years, with the growing realization that space exploration is heavily dependent on computing and data analysis, its focus has turned more towards Information Technology. The Computational Sciences Division has expanded rapidly as a result. In this article, I will give a brief overview of some of the past and present projects with a Bayesian content. Much more than is described here goes on within the Division. The web pages at http://ic.arc.nasa.gov give more information on these and the other Division projects.
Cyber Security: Big Data Think II Working Group Meeting
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Shaw, Derek
2015-01-01
This presentation focuses on approaches that could be used by a data computation center to identify attacks and to ensure malicious code and backdoors are identified if planted in a system. The goal is to identify actionable security information from the mountain of data that flows into and out of an organization. The approaches are applicable to a big data computational center, and some must also use big data techniques to extract the actionable security information from that mountain of data. The briefing covers the detection of malicious delivery sites and techniques for reducing the mountain of data so that intrusion detection information can be useful, and not hidden in a plethora of false alerts. It also looks at the identification of possible unauthorized data exfiltration.
Postdoctoral Fellow | Center for Cancer Research
The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and simulation. The ability to closely collaborate with the modern metabolomics center developed at CCR provides a unique opportunity for a postdoctoral candidate with a strong theoretical background and interest in demonstrating the incredible potential of computational approaches to solve problems from scientific disciplines and improve lives. The candidate will be given the opportunity to both construct data-driven models, as well as biologically validate the models by demonstrating the ability to predict the effects of altering tumor metabolism in laboratory and clinical settings.
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
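A hedged sketch of one placement policy that tools of this kind could analyze: the data center is split into clusters, and each incoming SDR session request is placed on a cluster with enough spare capacity. The least-loaded first-fit rule and the cluster names below are assumptions, not the paper's algorithms.

```python
# Place each SDR session on the least-loaded cluster that can satisfy its demand.

def allocate_session(clusters, demand):
    """clusters: dict name -> {'capacity': float, 'used': float}.
    Return the chosen cluster name, or None if the request must be rejected."""
    feasible = [(name, c) for name, c in clusters.items()
                if c["capacity"] - c["used"] >= demand]
    if not feasible:
        return None
    name, cluster = min(feasible,
                        key=lambda item: item[1]["used"] / item[1]["capacity"])
    cluster["used"] += demand
    return name

if __name__ == "__main__":
    clusters = {"c0": {"capacity": 100.0, "used": 40.0},
                "c1": {"capacity": 100.0, "used": 10.0}}
    print(allocate_session(clusters, 25.0))  # -> "c1" (least loaded)
    print(allocate_session(clusters, 80.0))  # -> None (no cluster can fit it)
```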
Argonne Out Loud: Computation, Big Data, and the Future of Cities
Catlett, Charlie
2018-01-16
Charlie Catlett, a Senior Computer Scientist at Argonne and Director of the Urban Center for Computation and Data at the Computation Institute of the University of Chicago and Argonne, talks about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
CILT2000: Ubiquitous Computing--Spanning the Digital Divide.
ERIC Educational Resources Information Center
Tinker, Robert; Vahey, Philip
2002-01-01
Discusses the role of ubiquitous and handheld computers in education. Summarizes the contributions of the Center for Innovative Learning Technologies (CILT) and describes the ubiquitous computing sessions at the CILT2000 Conference. (Author/YDS)
ERIC Educational Resources Information Center
Zamora, Ramon M.
Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…
Anaglyph, Landsat overlay Honolulu, Hawaii
NASA Technical Reports Server (NTRS)
2000-01-01
Honolulu, on the island of Oahu, is a large and growing urban area with limited space and water resources. This anaglyph, combining a Landsat image with SRTM topography, shows how the topography controls the urban growth pattern, causes cloud formation, and directs the rainfall runoff pattern. Red/blue glasses are required to see the 3-D effect. Features of interest in this scene include Diamond Head (an extinct volcano on the right side of the image), Waikiki Beach (just left of Diamond Head), the Punchbowl National Cemetery (another extinct volcano, left of center), downtown Honolulu and Honolulu harbor (lower left of center), and offshore reef patterns. The slopes of the Koolau mountain range are seen in the upper half of the image. Clouds commonly hang above ridges and peaks of the Hawaiian Islands, and in this rendition appear draped directly on the mountains. The clouds are actually about 1000 meters (3300 feet) above sea level. High resolution topographic and image data allow ecologists and planners to assess the effects of urban development on the sensitive ecosystems in tropical regions. This anaglyph was generated using topographic data from the Shuttle Radar Topography Mission, combined with a Landsat 7 satellite image collected coincident with the SRTM mission. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. Each point in the image is shifted slightly, depending on its elevation. When viewed through special glasses, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. Anaglyph glasses cover the left eye with a red filter and cover the right eye with a blue filter. The United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 18 by 28 kilometers (11 by 17 miles) Location: 21.3 deg. North lat., 157.9 deg. West lon. Orientation: North toward upper left Original Data Resolution: SRTM, 30 meters (99 feet); Landsat, 15 meters (50 feet) Date Acquired: SRTM, February 18, 2000; Landsat, February 12, 2000
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
Fifteen papers on computer centers and data processing management presented at the Association for Educational Data Systems (AEDS) 1976 convention are included in this document. The first two papers review the recent controversy for proposed licensing of data processors, and they are followed by a description of the Institute for Certification of…
37. Photograph of plan for repairs to computer room, 1958, ...
37. Photograph of plan for repairs to computer room, 1958, prepared by the Public Works Office, Underwater Sound Laboratory. Drawing on file at Caretaker Site Office, Naval Undersea Warfare Center, New London. Copyright-free. - Naval Undersea Warfare Center, Bowditch Hall, 600 feet east of Smith Street & 350 feet south of Columbia Cove, West bank of Thames River, New London, New London County, CT
ERIC Educational Resources Information Center
Vakil, Sepehr
2018-01-01
In this essay, Sepehr Vakil argues that a more serious engagement with critical traditions in education research is necessary to achieve a justice-centered approach to equity in computer science (CS) education. With CS rapidly emerging as a distinct feature of K-12 public education in the United States, calls to expand CS education are often…
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...
CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.
ERIC Educational Resources Information Center
Meyer, Edgar F., Jr.
1978-01-01
A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)
Facilities | Integrated Energy Solutions | NREL
High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed computing in support of the strategies needed to optimize the entire energy system.
Computers and Technological Forecasting
ERIC Educational Resources Information Center
Martino, Joseph P.
1971-01-01
Forecasting is becoming increasingly automated, thanks in large measure to the computer. It is now possible for a forecaster to submit his data to a computation center and call for the appropriate program. (No knowledge of statistics is required.) (Author)
Applied Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1994-01-01
The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.
A resource-sharing model based on a repeated game in fog computing.
Sun, Yan; Zhang, Nan
2017-03-01
With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
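A simplified, hypothetical sketch in the spirit of pooling sporadic spare resources with a credit-based incentive. The class, method names, and reward rule below are illustrative assumptions, not the paper's crowd-funding algorithm or incentive mechanism.

```python
# Owners contribute spare capacity to a shared fog pool and earn credits in
# proportion to the capacity their resources actually serve.

class FogPool:
    def __init__(self):
        self.capacity = {}   # owner -> contributed capacity units still free
        self.credits = {}    # owner -> earned incentive credits

    def contribute(self, owner, units):
        """An owner adds spare capacity to the shared pool."""
        self.capacity[owner] = self.capacity.get(owner, 0) + units
        self.credits.setdefault(owner, 0.0)

    def run_task(self, demand, reward_per_unit=1.0):
        """Serve a task from the owner with the most spare capacity and
        credit that owner in proportion to the capacity used."""
        if not self.capacity or max(self.capacity.values()) < demand:
            return None                      # no single contributor can serve it
        owner = max(self.capacity, key=self.capacity.get)
        self.capacity[owner] -= demand
        self.credits[owner] += demand * reward_per_unit
        return owner

pool = FogPool()
pool.contribute("edge-node-a", 8)
pool.contribute("edge-node-b", 5)
print(pool.run_task(4), pool.credits)        # edge-node-a serves the task
```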
Design of a robotic vehicle with self-contained intelligent wheels
NASA Astrophysics Data System (ADS)
Poulson, Eric A.; Jacob, John S.; Gunderson, Robert W.; Abbott, Ben A.
1998-08-01
The Center for Intelligent Systems has developed a small robotic vehicle named the Advanced Rover Chassis 3 (ARC 3) with six identical intelligent wheel units attached to a payload via a passive linkage suspension system. All wheels are steerable, so the ARC 3 can move in any direction while rotating at any rate allowed by the terrain and motors. Each intelligent wheel unit contains a drive motor, steering motor, batteries, and computer. All wheel units are identical, so manufacturing, programming, and spare replacement are greatly simplified. The intelligent wheel concept would allow the number and placement of wheels on the vehicle to be changed with no changes to the control system, except to list the position of all the wheels relative to the vehicle center. The task of controlling the ARC 3 is distributed between one master computer and the wheel computers. Tasks such as controlling the steering motors and calculating the speed of each wheel relative to the vehicle speed in a corner are dependent on the location of a wheel relative to the vehicle center and are processed by the wheel computers. Conflicts between the wheels are eliminated by computing the vehicle velocity control in the master computer. Various approaches to this distributed control problem, and various low-level control methods, have been explored.
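A sketch of the per-wheel computation implied above, using standard omnidirectional-drive kinematics: each wheel derives its own speed and steering angle from the commanded body velocity and its position relative to the vehicle center. The wheel layout and commanded velocities are hypothetical; this is not the ARC 3 control code.

```python
# Per-wheel speed and steering angle from commanded (vx, vy, omega).
import math

def wheel_command(vx, vy, omega, wheel_x, wheel_y):
    """Return (speed, steering_angle_rad) for a wheel at (wheel_x, wheel_y)
    measured from the vehicle center."""
    # Wheel velocity = body velocity + omega x r (planar cross product).
    wx = vx - omega * wheel_y
    wy = vy + omega * wheel_x
    return math.hypot(wx, wy), math.atan2(wy, wx)

if __name__ == "__main__":
    # Hypothetical six-wheel layout (meters from the vehicle center).
    wheels = [(0.4, 0.3), (0.0, 0.3), (-0.4, 0.3),
              (0.4, -0.3), (0.0, -0.3), (-0.4, -0.3)]
    for x, y in wheels:
        speed, angle = wheel_command(0.5, 0.0, 0.2, x, y)   # gentle left turn
        print(f"wheel at ({x:+.1f},{y:+.1f}): {speed:.2f} m/s, "
              f"{math.degrees(angle):+.1f} deg")
```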
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
Current state and future direction of computer systems at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Rogers, James L. (Editor); Tucker, Jerry H. (Editor)
1992-01-01
Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.
Future of Department of Defense Cloud Computing Amid Cultural Confusion
2013-03-01
enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments...endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud ...through data center consolidation and individual Service-provided cloud computing.
Computer Needs and Computer Problems in Developing Countries.
ERIC Educational Resources Information Center
Huskey, Harry D.
A survey of the computer environment in a developing country is provided. Levels of development are considered and the educational requirements of countries at various levels are discussed. Computer activities in India, Burma, Pakistan, Brazil and a United Nations sponsored educational center in Hungary are all described. (SK/Author)
Computer Viruses. Legal and Policy Issues Facing Colleges and Universities.
ERIC Educational Resources Information Center
Johnson, David R.; And Others
Compiled by various members of the higher educational community together with risk managers, computer center managers, and computer industry experts, this report recommends establishing policies on an institutional level to protect colleges and universities from computer viruses and the accompanying liability. Various aspects of the topic are…
PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths
NASA Astrophysics Data System (ADS)
Riffel, Rogério; Borges Vale, Tibério
2011-05-01
PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.
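For reference, the standard definition of the equivalent width that such tools compute (a textbook formula, not code excerpted from PACCE): for a line spanning wavelengths λ1 to λ2, with observed flux F_λ and fitted continuum F_c(λ),

```latex
\begin{equation}
W_\lambda = \int_{\lambda_1}^{\lambda_2} \left(1 - \frac{F_\lambda}{F_c(\lambda)}\right) d\lambda .
\end{equation}
```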
76 FR 1410 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...
Computers, Networks, and Desegregation at San Jose High Academy.
ERIC Educational Resources Information Center
Solomon, Gwen
1987-01-01
Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Holkenbrink, Patrick F.
1978-01-01
Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.
Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry
2015-10-01
To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.
3-D perspective of Saint Pierre and Miquelon Islands
NASA Technical Reports Server (NTRS)
2000-01-01
This image shows two islands, Miquelon and Saint Pierre, located south of Newfoundland, Canada. These islands, along with five smaller islands, are a self-governing territory of France. A thin barrier beach divides Miquelon, with Grande Miquelon to the north and Petite Miquelon to the south. Saint Pierre Island is located to the lower right. With the islands' location in the north Atlantic Ocean and their deep water ports, fishing is the major part of the economy. The maximum elevation of the island is 240 meters (787 feet). The land mass of the islands is about 242 square kilometers, or 1.5 times the size of Washington, DC. This image shows how data collected by the Shuttle Radar Topography Mission (SRTM) can be used to enhance other satellite images. Color and natural shading are provided by a Landsat 7 image acquired on September 1, 1999. Terrain perspective and shading were derived from SRTM elevation data acquired on February 12, 2000. Topography is exaggerated by about six times vertically. The United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data. Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.
Multi-Parameter GEOSCOPE Land Stations and Ocean Bottom Observatories
NASA Astrophysics Data System (ADS)
Stutzmann, E.; Roult, G.; Montagner, J.; Karczewski, J.; Geoscope Group,.
2002-12-01
Since the mid-eighties, the number of high-quality broadband stations installed around the world has increased in a spectacular way, and these stations now cover most of the emerged land, with a higher concentration in the northern hemisphere between latitudes 0 and 60N. The GEOSCOPE program was the first global network of three-component broadband seismic stations and has contributed to the elaboration of the FDSN. The 25 broadband GEOSCOPE stations fulfill the FDSN criteria; the data are stored in data centers at IPGP and the IRIS DMC and are accessible through the netdc procedure (email to netdc@ipgp.jussieu.fr) or via the web. GEOSCOPE data have contributed to major advances in our knowledge of the Earth's interior. Earthquake CMTs are now determined routinely, and the temporal history of the seismic source is accessible for the largest earthquakes. The geographical distribution of GEOSCOPE stations, especially in the southern hemisphere, has also played an important role in improving the resolution of global tomographic models. Recent P-wave models give high-resolution images of subducted slabs, whereas recent S-wave models have enabled the discovery of the two superswells. The evolution of the GEOSCOPE program is now focused on the installation of multi-parameter stations including three-component broadband seismic sensors, pressure gauges, thermometers, and GPS receivers. These stations will make it possible to reduce seismic noise and, among other things, to better characterize the background free oscillations of the Earth. Seismic station coverage of the Earth is limited by the presence of oceans, and the next step is the installation of ocean-bottom broadband stations. For this purpose, in 1992 a first temporary station, sismobs/OFM, operated for two weeks along the Mid-Atlantic Ridge. In 1997, an international multi-parameter station, MOISE, operated for three months in Monterey Bay. The next step is the installation of the station NERO in an ODP hole along the Ninety East Ridge in collaboration with IFREMER and JAMSTEC. The location of this station has been decided by the ION program, and the station will operate for at least two years.
Kamchatka Peninsula, Russia 3-D Perspective with Landsat Overlay
NASA Technical Reports Server (NTRS)
2000-01-01
This three-dimensional perspective view, looking up the Tigil River, shows the western side of the volcanically active Kamchatka Peninsula, Russia. The image shows that the Tigil River has eroded down from a higher and differing landscape and now flows through, rather than around, the large green-colored bedrock ridge in the foreground. The older surface was likely composed of volcanic ash and debris from eruptions of nearby volcanoes. The green tones indicate that denser vegetation grows on south-facing sunlit slopes at the northern latitudes. High-resolution SRTM elevation data will be used by geologists to study how rivers shape the landscape, and by ecologists to study the influence of topography on ecosystems. This image shows how data collected by the Shuttle Radar Topography Mission (SRTM) can be used to enhance other satellite images. Color and natural shading are provided by a Landsat 7 image acquired on January 31, 2000. Terrain perspective and shading were derived from SRTM elevation data acquired on February 12, 2000. Topography is exaggerated by about six times vertically. The United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA), and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 71 km (44 miles) x 20 km (12 miles). Location: 57 deg. North lat., 159 deg. East lon. Orientation: Looking to the east. Original Data Resolution: 30 meters (99 feet). Date Acquired: February 12, 2000
77 FR 38630 - Open Internet Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... Computer Science and Co-Founder of the Berkman Center for Internet and Society, Harvard University, is... of Technology Computer Science and Artificial Intelligence Laboratory, is appointed vice-chairperson... Jennifer Rexford, Professor of Computer Science, Princeton University Dennis Roberson, Vice Provost...
Exposure Science and the US EPA National Center for Computational Toxicology
The emerging field of computational toxicology applies mathematical and computer models and molecular biological and chemical approaches to explore both qualitative and quantitative relationships between sources of environmental pollutant exposure and adverse health outcomes. The...
NASA Technical Reports Server (NTRS)
Salmon, Ellen
1996-01-01
The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.
TOSCA calculations and measurements for the SLAC SLC damping ring dipole magnet
NASA Astrophysics Data System (ADS)
Early, R. A.; Cobb, J. K.
1985-04-01
The SLAC damping ring dipole magnet was originally designed with removable nose pieces at the ends. Recently, a set of magnetic measurements was taken of the vertical component of induction along the center of the magnet for four different pole-end configurations and several current settings. The three dimensional computer code TOSCA, which is currently installed on the National Magnetic Fusion Energy Computer Center's Cray X-MP, was used to compute field values for the four configurations at current settings near saturation. Comparisons were made for magnetic induction as well as effective magnetic lengths for the different configurations.
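The abstract compares "effective magnetic lengths" without defining them. The conventional definition, stated here as an assumption about what was computed, is the field integral along the beam axis divided by the central field:

    \[
      L_{\mathrm{eff}} \;=\; \frac{1}{B_{0}} \int B_{y}(x{=}0,\,y{=}0,\,z)\,\mathrm{d}z ,
    \]

where B_0 is the vertical induction at the magnet center and the integral runs along the beam direction through and beyond the pole ends.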
NASA Technical Reports Server (NTRS)
Bennett, Jerome (Technical Monitor)
2002-01-01
The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.
CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.
Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan
2017-06-24
The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the characteristics of users' sequences, including the size of the dataset and the lengths of the sequences, can take arbitrary values and are generally unknown before submission, a fact that previous work unfortunately ignores. Second, the center star strategy is suited to aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, on heterogeneous CPU/GPU platforms, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We conclude that harvesting the high performance of modern GPUs is a promising approach to accelerating multiple sequence alignment, and that adopting the co-run computation model can significantly increase overall system utilization. The source code is available at https://github.com/wangvsa/CMSA .
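For readers unfamiliar with the center star strategy mentioned above, the sketch below shows the baseline center-selection step whose cost the paper attacks: every sequence is scored by its summed distance to all the others, and the minimizer becomes the alignment center. This naive version uses full dynamic-programming edit distances (the expensive behavior the abstract's O(mn²) figure refers to); CMSA's bitmap/k-mer speedup is not reproduced here, and the function names are illustrative.

    from itertools import combinations

    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance, O(|a|*|b|)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def pick_center(seqs):
        """Center-star step 1: the sequence with the smallest total distance
        to all the others becomes the center of the multiple alignment."""
        totals = [0] * len(seqs)
        for (i, a), (j, b) in combinations(enumerate(seqs), 2):
            d = edit_distance(a, b)
            totals[i] += d
            totals[j] += d
        return min(range(len(seqs)), key=totals.__getitem__)

    # Tiny example with three similar DNA reads.
    center_index = pick_center(["ACGTACGT", "ACGTTCGT", "ACGAACGT"])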
UC Merced Center for Computational Biology Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colvin, Michael; Watanabe, Masakatsu
Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of biological sciences undergraduate and graduate program, one that emphasized biological concepts and treated biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, as the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate, and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, at the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to provide support for quantitative and computational biology programs at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have continuously involved multi-institutional collaborations with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research, including molecular modeling, cell biology, applied math, evolutionary biology, bioinformatics, and more. The CCB sponsored the first distinguished speaker series at UC Merced, which played an important role in spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals is to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a summer undergraduate internship program had been established under CCB to train Biological Sciences researchers in highly mathematical and computationally intensive work.
By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.
Computing Protein-Protein Association Affinity with Hybrid Steered Molecular Dynamics.
Rodriguez, Roberto A; Yu, Lili; Chen, Liao Y
2015-09-08
Computing protein-protein association affinities is one of the fundamental challenges in computational biophysics/biochemistry. The overwhelming amount of statistics in the phase space of very high dimensions cannot be sufficiently sampled even with today's high-performance computing power. In this article, we extend a potential of mean force (PMF)-based approach, the hybrid steered molecular dynamics (hSMD) approach we developed for ligand-protein binding, to protein-protein association problems. For a protein complex consisting of two protomers, P1 and P2, we choose m (≥3) segments of P1 whose m centers of mass are to be steered in a chosen direction and n (≥3) segments of P2 whose n centers of mass are to be steered in the opposite direction. The coordinates of these m + n centers constitute a phase space of 3(m + n) dimensions (3(m + n)D). All other degrees of freedom of the proteins, ligands, solvents, and solutes are freely subject to the stochastic dynamics of the all-atom model system. Conducting SMD along a line in this phase space, we obtain the 3(m + n)D PMF difference between two chosen states: one single state in the associated state ensemble and one single state in the dissociated state ensemble. This PMF difference is the first of four contributors to the protein-protein association energy. The second contributor is the 3(m + n - 1)D partial partition in the associated state accounting for the rotations and fluctuations of the (m + n - 1) centers while fixing one of the m + n centers of the P1-P2 complex. The two other contributors are the 3(m - 1)D partial partition of P1 and the 3(n - 1)D partial partition of P2 accounting for the rotations and fluctuations of their m - 1 or n - 1 centers while fixing one of the m/n centers of P1/P2 in the dissociated state. Each of these three partial partitions can be factored exactly into a 6D partial partition multiplied by a remaining factor accounting for the small fluctuations while fixing three of the centers of P1, P2, or the P1-P2 complex, respectively. These small fluctuations can be well approximated as Gaussian, and every 6D partition can be reduced in an exact manner to three problems of 1D sampling, counting the rotations and fluctuations around one of the centers as being fixed. We apply this hSMD approach to the Ras-RalGDS complex, choosing three centers on RalGDS and three on Ras (m = n = 3). At a computing cost of about 71.6 wall-clock hours using 400 computing cores in parallel, we obtained the association energy, -9.2 ± 1.9 kcal/mol on the basis of CHARMM 36 parameters, which agrees well with the experimental value, -8.4 ± 0.2 kcal/mol.
Trip attraction rates of shopping centers in Northern New Castle County, Delaware.
DOT National Transportation Integrated Search
2004-07-01
This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to the ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
Computer Bits: Child Care Center Management Software Buying Guide Update.
ERIC Educational Resources Information Center
Neugebauer, Roger
1987-01-01
Compares seven center management programs used for basic financial and data management tasks such as accounting, payroll and attendance records, and mailing lists. Describes three other specialized programs and gives guidelines for selecting the best software for a particular center. (NH)
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-09-30
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
ERIC Educational Resources Information Center
Grandgenett, Neal; And Others
McMillan Magnet Center is located in urban Omaha, Nebraska, and specializes in math, computers, and communications. Once a junior high school, it was converted to a magnet center for seventh and eighth graders in the 1983-84 school year as part of Omaha's voluntary desegregation plan. Now the ethnic makeup of the student population is about 50%…
Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications
2011-05-31
microplate to in-plane resonance. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.)...issue involved. Towards this end, a suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via Lorentz force...force to drive the suspended MEMS-based microplate to in-plane resonance.
1984-06-29
sheet metal, machined and composite parts and assembling the components into final products; planning, evaluating, testing, inspecting and...Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining and composite...determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows
3D Object Recognition: Symmetry and Virtual Views
1992-12-01
Artificial Intelligence Laboratory, 545 Technology Square, Cambridge... ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING, A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992: 3D Object...research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial
[Automated processing of data from the 1985 population and housing census].
Cholakov, S
1987-01-01
The author describes the method of automated data processing used in the 1985 census of Bulgaria. He notes that the computerization of the census involves decentralization and the use of regional computing centers as well as data processing at the Central Statistical Office's National Information Computer Center. Special attention is given to problems concerning the projection and programming of census data. (SUMMARY IN ENG AND RUS)
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software-defined networking (SDN) has become a focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of the data communication network (DCN) in place of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not been evaluated until now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a reference for future network deployments.
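As a small illustration of how the reported metrics are typically tallied (not the testbed's actual code; the record fields are assumptions), the Python sketch below computes blocking probability and average lightpath provisioning time from a log of simulated connection requests.

    from dataclasses import dataclass

    @dataclass
    class Request:
        blocked: bool        # True if the lightpath request was rejected
        setup_ms: float      # provisioning time for served requests, in ms

    def summarize(requests):
        """Blocking probability and mean provisioning time over a request log."""
        served = [r for r in requests if not r.blocked]
        blocking_probability = 1.0 - len(served) / len(requests)
        avg_provisioning_ms = (sum(r.setup_ms for r in served) / len(served)
                               if served else float("nan"))
        return blocking_probability, avg_provisioning_ms

    # Example: three requests, one of which was blocked.
    log = [Request(False, 42.0), Request(True, 0.0), Request(False, 55.0)]
    print(summarize(log))   # -> (0.333..., 48.5)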
Ames Research Center Publications: A Continuing Bibliography
NASA Technical Reports Server (NTRS)
1981-01-01
The Ames Research Center Publications: A Continuing Bibliography contains the research output of the Center indexed during 1981 in Scientific and Technical Aerospace Reports (STAR), Limited Scientific and Technical Aerospace Reports (LSTAR), International Aerospace Abstracts (IAA), and Computer Program Abstracts (CPA). This bibliography is published annually in an attempt to effect greater awareness and distribution of the Center's research output.
Handling of Varied Data Bases in an Information Center Environment.
ERIC Educational Resources Information Center
Williams, Martha E.
Information centers exist to provide information from machine-readable data bases to users in industry, universities, and other organizations. The Computer Search Center of the IIT Research Institute was designed with a number of variables and uncertainties before it. In this paper, the author discusses how the Center was designed to enable it to…
76 FR 59803 - Children's Online Privacy Protection Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...
System and method for transferring telemetry data between a ground station and a control center
NASA Technical Reports Server (NTRS)
Ray, Timothy J. (Inventor); Ly, Vuong T. (Inventor)
2012-01-01
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for coordinating communications between a ground station, a control center, and a spacecraft. The method receives a call to a simple, unified application programmer interface that implements communications protocols related to outer space. When the instruction relates to receiving a command at the control center for the ground station, the method generates an abstract message by agreeing upon a format for each type of abstract message with the ground station and using a set of message definitions to configure the command in the agreed-upon format, encodes the abstract message to generate an encoded message, and transfers the encoded message to the ground station. It performs similar actions when the instruction relates to receiving a second command as a second encoded message at the ground station from the control center, and when the determined instruction type relates to transmitting information to the control center.
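The workflow in the abstract (build an abstract message in an agreed-upon format, encode it, transfer it) can be pictured with the hedged Python sketch below. Every class, method, and field name here is an illustrative placeholder, not the patented interface, and the JSON wire format stands in for whatever encoding the ground station and control center actually agree on.

    import json

    class UnifiedGroundLink:
        """Illustrative single entry point that hides the ground-station
        protocol details from the control center (names are placeholders)."""

        def __init__(self, message_definitions, transport):
            self.defs = message_definitions   # agreed-upon fields per message type
            self.transport = transport        # e.g. a socket to the ground station

        def send_command(self, msg_type, fields):
            abstract_msg = self._build(msg_type, fields)   # 1. abstract message
            encoded = self._encode(abstract_msg)           # 2. encode
            self.transport.send(encoded)                   # 3. transfer

        def _build(self, msg_type, fields):
            required = self.defs[msg_type]                 # the agreed format
            missing = [f for f in required if f not in fields]
            if missing:
                raise ValueError(f"missing fields for {msg_type}: {missing}")
            return {"type": msg_type, **{f: fields[f] for f in required}}

        def _encode(self, abstract_msg):
            # Placeholder wire format; a real system would use a CCSDS-style encoding.
            return json.dumps(abstract_msg).encode("utf-8")

Receiving on the ground-station side would run the same steps in reverse: decode the bytes, validate them against the same message definitions, and hand the reconstructed command to the station software.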
Quantum computing with defects.
Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D
2010-05-11
Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.
High Performance Computing Meets Energy Efficiency - Continuum Magazine | NREL
[Figure caption: turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.] The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data
Examining the Feasibility and Effect of Transitioning GED Tests to Computer
ERIC Educational Resources Information Center
Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael
2010-01-01
This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…
75 FR 8311 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-24
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... hereby giving notice to the record subjects of a computer matching program between the Department of... conduct a computer matching program between the agencies. The purpose of this agreement is to verify an...
Employment Trends in Computer Occupations. Bulletin 2101.
ERIC Educational Resources Information Center
Howard, H. Philip; Rothstein, Debra E.
In 1980 1,455,000 persons worked in computer occupations. Two in five were systems analysts or programmers; one in five was a keypunch operator; one in 20 was a computer service technician; and more than one in three were computer and peripheral equipment operators. Employment was concentrated in major urban centers in four major industry…
Webinar: Delivering Transformational HPC Solutions to Industry
Streitz, Frederick
2018-01-16
Dr. Frederick Streitz, director of the High Performance Computing Innovation Center, discusses Lawrence Livermore National Laboratory computational capabilities and expertise available to industry in this webinar.
Ames Research Center publications: A continuing bibliography, 1980
NASA Technical Reports Server (NTRS)
1981-01-01
This bibliography lists formal NASA publications, journal articles, books, chapters of books, patents, contractor reports, and computer programs that were issued by Ames Research Center and indexed by Scientific and Technical Aerospace Reports, Limited Scientific and Technical Aerospace Reports, International Aerospace Abstracts, and Computer Program Abstracts in 1980. Citations are arranged by directorate, type of publication, and NASA accession numbers. Subject, personal author, corporate source, contract number, and report/accession number indexes are provided.
NASA Technical Reports Server (NTRS)
2001-01-01
Howmet Research Corporation was the first to commercialize an innovative cast metal technology developed at Auburn University, Auburn, Alabama. With funding assistance from NASA's Marshall Space Flight Center, Auburn University's Solidification Design Center (a NASA Commercial Space Center) developed accurate nickel-based superalloy data for casting molten metals. Through a contract agreement, Howmet used the data to develop computer model predictions of molten metals and molding materials in cast metal manufacturing. Howmet Metal Mold (HMM), part of Howmet Corporation Specialty Products, of Whitehall, Michigan, utilizes metal molds to manufacture net shape castings in various alloys and amorphous metal (metallic glass). By implementing the thermophysical property data from Auburn researchers, Howmet employs its newly developed computer model predictions to offer customers high-quality, low-cost products with significantly improved mechanical properties. Components fabricated with this new process replace components originally made from forgings or billet. Compared with products manufactured through traditional casting methods, Howmet's computer-modeled castings come out on top.
Touch-screen computerized education for patients with brain injuries.
Patyk, M; Gaynor, S; Kelly, J; Ott, V
1998-01-01
The use of computer technology for patient education has increased in recent years. This article describes a study that measures the attitudes and perceptions of healthcare professionals and laypeople regarding the effectiveness of a multimedia computer, the Brain Injury Resource Center (BIRC), as an educational tool. The study focused on three major themes: (a) usefulness of the information presented, (b) effectiveness of the multimedia touch-screen computer methodology, and (c) the appropriate time for making this resource available. This prospective study, conducted in an acute care medical center, obtained healthcare professionals' evaluations using a written survey and responses from patients with brain injury and their families during interviews. The findings have yielded excellent ratings as to the ease of understanding and usefulness of the BIRC. By using sight, sound, and touch, such a multimedia learning center has the potential to simplify patient and family education.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Compiler)
1993-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was established at the NASA Lewis Research Center in Cleveland, Ohio to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1992 are described.
Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.
ERIC Educational Resources Information Center
Montminy, Peter
1999-01-01
Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)
Computer Center: It's Time to Take Inventory.
ERIC Educational Resources Information Center
Spain, James D.
1984-01-01
Describes typical instructional applications of computers. Areas considered include: (1) instructional simulations and animations; (2) data analysis; (3) drill and practice; (4) student evaluation; (5) development of computer models and simulations; (6) biometrics or biostatistics; and (7) direct data acquisition and analysis. (JN)
Center for Computational Structures Technology
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Perry, Ferman W.
1995-01-01
The Center for Computational Structures Technology (CST) is intended to serve as a focal point for the diverse CST research activities. The CST activities include the use of numerical simulation and artificial intelligence methods in modeling, analysis, sensitivity studies, and optimization of flight-vehicle structures. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The key elements of the Center are: (1) conducting innovative research on advanced topics of CST; (2) acting as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); (3) strong collaboration with NASA scientists and researchers from universities and other government laboratories; and (4) rapid dissemination of CST to industry, through integration of industrial personnel into the ongoing research efforts.
NASA Space Engineering Research Center for VLSI systems design
NASA Technical Reports Server (NTRS)
1991-01-01
This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.
A Comprehensive Computer Package for Ambulatory Surgical Facilities
Kessler, Robert R.
1980-01-01
Ambulatory surgical centers are a cost effective alternative to hospital surgery. Their increasing popularity has contributed to heavy case loads, an accumulation of vast amounts of medical and financial data and economic pressures to maintain a tight control over “cash flow”. Computerization is now a necessity to aid ambulatory surgical centers to maintain their competitive edge. An on-line system is especially necessary as it allows interactive scheduling of surgical cases, immediate access to financial data and rapid gathering of medical and statistical information. This paper describes the significant features of the computer package in use at the Salt Lake Surgical Center, which processes 500 cases per month.
Integration of the Chinese HPC Grid in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo simulation in SCEAPI and have been providing CPU power since fall 2015.
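The essential job of the bridge described above is protocol translation: a batch-style job description arriving at the ARC-CE is re-expressed as an authenticated REST call. The hedged Python sketch below illustrates that translation step only; the endpoint path, payload fields, and token handling are invented placeholders, not the real SCEAPI or ARC-CE interfaces.

    import requests

    def submit_via_rest_bridge(base_url, token, job):
        """Re-express a batch-style job description as a REST submission.
        (All URL paths and field names are illustrative placeholders.)"""
        payload = {
            "executable": job["executable"],
            "arguments": job.get("arguments", []),
            "input_files": job.get("inputs", []),
            "cores": job.get("cores", 1),
        }
        resp = requests.post(
            f"{base_url}/jobs",                            # hypothetical endpoint
            json=payload,
            headers={"Authorization": f"Bearer {token}"},  # translated credential
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["job_id"]                       # hypothetical response field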
New frontiers in design synthesis
NASA Technical Reports Server (NTRS)
Goldin, D. S.; Venneri, S. L.; Noor, A. K.
1999-01-01
The Intelligent Synthesis Environment (ISE), which is one of the major strategic technologies under development at NASA centers and the University of Virginia, is described. One of the major objectives of ISE is to significantly enhance the rapid creation of innovative affordable products and missions. ISE uses a synergistic combination of leading-edge technologies, including high performance computing, high capacity communications and networking, human-centered computing, knowledge-based engineering, computational intelligence, virtual product development, and product information management. The environment will link scientists, design teams, manufacturers, suppliers, and consultants who participate in the mission synthesis as well as in the creation and operation of the aerospace system. It will radically advance the process by which complex science missions are synthesized and high-tech engineering systems are designed, manufactured, and operated. The five major components critical to ISE are human-centered computing, infrastructure for distributed collaboration, rapid synthesis and simulation tools, life cycle integration and validation, and cultural change in both the engineering and science creative process. The five components and their subelements are described. Related U.S. government programs are outlined, and the future impact of ISE on engineering research and education is discussed.
Networking at NASA. Johnson Space Center
NASA Technical Reports Server (NTRS)
Garman, John R.
1991-01-01
A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.
Steve Hammond, Center Director II-Technical, Steven.Hammond@nrel.gov | 303-275-4121. Steve Hammond is director of the Computational Science Center at the National Renewable Energy Laboratory, which includes leading NREL's efforts in energy-efficient data centers. Prior to NREL, Steve managed the
NASA Technical Reports Server (NTRS)
Tompkins, F. G.
1983-01-01
The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.
Center for Aeronautics and Space Information Sciences
NASA Technical Reports Server (NTRS)
Flynn, Michael J.
1992-01-01
This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Science (CASIS) program. The topics covered are computer architecture, networking, and neural nets.
Accelerating MP2C dispersion corrections for dimers and molecular crystals
NASA Astrophysics Data System (ADS)
Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.
2013-06-01
The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, the combination of the new monomer-basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.
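For context (this equation restates the published MP2C scheme as generally described in the literature, not something derived in the abstract above): MP2C replaces the uncoupled Hartree-Fock dispersion implicit in a supermolecular MP2 interaction energy with the coupled Kohn-Sham (TDDFT) dispersion,

    \[
      E_{\mathrm{int}}^{\mathrm{MP2C}}
        \;=\;
      E_{\mathrm{int}}^{\mathrm{MP2}}
        \;-\; E_{\mathrm{disp}}^{\mathrm{UCHF}}
        \;+\; E_{\mathrm{disp}}^{\mathrm{CKS}} ,
    \]

and the change reported here is to evaluate the two dispersion terms in a monomer-centered rather than dimer-centered basis.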
NASA Technical Reports Server (NTRS)
Davis, Bruce E.; Elliot, Gregory
1989-01-01
Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts in, and places new demands on, traditional computer science and geographic training. The Center is not merely another computer lab but one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.
Voltage profile program for the Kennedy Space Center electric power distribution system
NASA Technical Reports Server (NTRS)
1976-01-01
The Kennedy Space Center voltage profile program computes voltages at all buses greater than 1 kV in the network under various conditions of load. The computation is based on power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady-state and transient operation. In the steady-state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts, it is assumed that tap changing is not accomplished, so the transformer secondary voltage is allowed to sag.
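The original KSC program is not listed in the abstract; as a minimal sketch of the Newton-Raphson load flow it describes, the Python fragment below solves a network with one slack bus and PQ load buses, using a numerical Jacobian to keep the example short. The bus numbering, per-unit values, and two-bus test case are illustrative assumptions, not the KSC network.

    import numpy as np

    def newton_load_flow(Y, p_spec, q_spec, tol=1e-8, max_iter=20):
        """Minimal Newton-Raphson load flow: bus 0 is the slack bus (1.0 pu, 0 rad),
        buses 1..n-1 are PQ buses. Y is the complex bus admittance matrix;
        p_spec/q_spec are net injected powers at the PQ buses (negative = load)."""
        n = Y.shape[0]
        theta = np.zeros(n)          # voltage angles
        vm = np.ones(n)              # voltage magnitudes

        def mismatch(x):
            theta[1:], vm[1:] = x[:n - 1], x[n - 1:]
            V = vm * np.exp(1j * theta)
            S = V * np.conj(Y @ V)   # complex power injected at every bus
            return np.concatenate([S.real[1:] - p_spec, S.imag[1:] - q_spec])

        x = np.concatenate([theta[1:], vm[1:]])
        for _ in range(max_iter):
            f = mismatch(x)
            if np.max(np.abs(f)) < tol:
                break
            # Numerical Jacobian keeps the sketch short; production codes
            # use the analytic polar-form Jacobian instead.
            J = np.zeros((f.size, x.size))
            for k in range(x.size):
                dx = np.zeros_like(x)
                dx[k] = 1e-6
                J[:, k] = (mismatch(x + dx) - f) / 1e-6
            x = x - np.linalg.solve(J, f)
        theta[1:], vm[1:] = x[:n - 1], x[n - 1:]
        return vm, theta

    # Two-bus example: slack bus feeding a 0.5 + j0.2 per-unit load
    # through a line of impedance 0.01 + j0.05 per-unit.
    y = 1.0 / (0.01 + 0.05j)
    Y = np.array([[y, -y], [-y, y]])
    vm, theta = newton_load_flow(Y, p_spec=np.array([-0.5]), q_spec=np.array([-0.2]))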
Developmental Stages in School Computer Use: Neither Marx Nor Piaget.
ERIC Educational Resources Information Center
Lengel, James G.
Karl Marx's theory of stages can be applied to computer use in the schools. The first stage, the P Stage, comprises the entry of the computer into the school. Computer use at this stage is personal and tends to center around one personality. Social studies teachers are seldom among this select few. The second stage of computer use, the D Stage, is…
Nontrivial, Nonintelligent, Computer-Based Learning.
ERIC Educational Resources Information Center
Bork, Alfred
1987-01-01
This paper describes three interactive computer programs used with personal computers to present science learning modules for all ages. Developed by groups of teachers at the Educational Technology Center at the University of California, Irvine, these instructional materials do not use the techniques of contemporary artificial intelligence. (GDC)
Advanced laptop and small personal computer technology
NASA Technical Reports Server (NTRS)
Johnson, Roger L.
1991-01-01
Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computer and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.
Argonne's Magellan Cloud Computing Research Project
Beckman, Pete
2017-12-11
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
Argonne's Magellan Cloud Computing Research Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Pete
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
Computer Operating System Maintenance.
1982-06-01
The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access
Computational Nanotechnology Molecular Electronics, Materials and Machines
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
User-Centered Design of Online Learning Communities
ERIC Educational Resources Information Center
Lambropoulos, Niki, Ed.; Zaphiris, Panayiotis, Ed.
2007-01-01
User-centered design (UCD) is gaining popularity in both the educational and business sectors. This is due to the fact that UCD sheds light on the entire process of analyzing, planning, designing, developing, using, evaluating, and maintaining computer-based learning. "User-Centered Design of Online Learning Communities" explains how…
Planning for the Automation of School Library Media Centers.
ERIC Educational Resources Information Center
Caffarella, Edward P.
1996-01-01
Geared for school library media specialists whose centers are in the early stages of automation or conversion to a new system, this article focuses on major components of media center automation: circulation control; online public access catalogs; machine readable cataloging; retrospective conversion of print catalog cards; and computer networks…
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...
28 CFR 523.13 - Community corrections center good time.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...
ERIC Educational Resources Information Center
Selig, Judith A.; And Others
This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December 1966 to August 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hules, John
This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review of the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.
76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-17
... (CMS); and Department of Defense (DoD), Manpower Data Center (DMDC), Defense Enrollment and Eligibility... the results of the computer match and provide the information to TMA for use in its matching program... under TRICARE. DEERS will receive the results of the computer match and provide the information provided...
76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-15
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...
76 FR 77811 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-14
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...
2009-09-15
Obama Administration launches Cloud Computing Initiative at Ames Research Center. Left to right: S. Pete Worden, Center Director; Lori Garver, NASA Deputy Administrator; and Vivek Kundra, White House Chief Federal Information Officer.
77 FR 11139 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
...: Center for Scientific Review Special Emphasis Panel; ``Genetics and Epigenetics of Disease.'' Date: March... Scientific Review Special Emphasis Panel; Small Business: Cell, Computational, and Molecular Biology. Date...
Biomedical Computing Technology Information Center: introduction and report of early progress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maskewitz, B.F.; Henne, R.L.; McClain, W.J.
1976-01-01
In July 1975, the Biomedical Computing Technology Information Center (BCTIC) was established by the Division of Biomedical and Environmental Research of the U.S. Energy Research and Development Administration (ERDA) at the Oak Ridge National Laboratory. BCTIC collects, organizes, evaluates, and disseminates information on computing technology pertinent to biomedicine, providing needed routes of communication between installations and serving as a clearinghouse for the exchange of biomedical computing software, data, and interface designs. This paper presents BCTIC's functions and early progress to the MUMPS Users' Group in order to stimulate further discussion and cooperation between the two organizations. (BCTIC services are available to its sponsors and their contractors and to any individual/group willing to participate in mutual exchange.) 1 figure.
NASA Astrophysics Data System (ADS)
Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard
2013-02-01
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure, which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
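As an aside on the plasticity treatment described above, the radial return algorithm for an elastic-perfectly plastic von Mises material reduces to scaling the trial deviatoric stress back onto the yield surface. The sketch below is a generic textbook version in Python, not the authors' cell-centered Lagrangian implementation; the shear modulus and yield stress are placeholder values.

```python
import numpy as np

def radial_return(s_trial, mu, sigma_y):
    """Return-map a trial deviatoric stress onto the von Mises yield surface.

    s_trial : (3, 3) trial deviatoric stress tensor
    mu      : shear modulus
    sigma_y : yield stress (perfect plasticity, no hardening)
    """
    norm = np.sqrt(np.tensordot(s_trial, s_trial))      # Frobenius norm ||s||
    f = np.sqrt(1.5) * norm - sigma_y                    # von Mises yield function
    if f <= 0.0:                                         # elastic step: accept trial state
        return s_trial, 0.0
    dgamma = f / (3.0 * mu)                              # plastic multiplier
    s_new = (sigma_y / (np.sqrt(1.5) * norm)) * s_trial  # radial scaling onto the yield surface
    return s_new, dgamma

# placeholder trial deviator (MPa) that violates the yield criterion
s = np.array([[200.0, 0.0, 0.0],
              [0.0, -100.0, 0.0],
              [0.0, 0.0, -100.0]])
print(radial_return(s, mu=80e3, sigma_y=250.0))
```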
Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, AD; Page, Christina; Lytle, Bob
The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane
2003-01-01
This poster describes the development of user-centered interfaces in order to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.
Human Centered Design and Development for NASA's MerBoard
NASA Technical Reports Server (NTRS)
Trimble, Jay
2003-01-01
This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens which can be shown on the user's computer, which will allow scientists in many locations to interpret and evaluate mission data in real-time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, Mer Human Centered Computers, FIDO 2001 observations and MerBoard prototypes.
Payload Operations Control Center (POCC). [spacelab flight operations
NASA Technical Reports Server (NTRS)
Shipman, D. L.; Noneman, S. R.; Terry, E. S.
1981-01-01
The Spacelab payload operations control center (POCC) timeline analysis program which is used to provide POCC activity and resource information as a function of mission time is described. This program is fully automated and interactive, and is equipped with tutorial displays. The tutorial displays are sufficiently detailed for use by a program analyst having no computer experience. The POCC timeline analysis program is designed to operate on the VAX/VMS version V2.1 computer system.
Development of computational methods for unsteady aerodynamics at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.; Whitlow, Woodrow, Jr.
1987-01-01
The current scope, recent progress, and plans for research and development of computational methods for unsteady aerodynamics at the NASA Langley Research Center are reviewed. Both integral equations and finite difference methods for inviscid and viscous flows are discussed. Although the great bulk of the effort has focused on finite difference solution of the transonic small perturbation equation, the integral equation program is given primary emphasis here because it is less well known.
Development of computational methods for unsteady aerodynamics at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.; Whitlow, Woodrow, Jr.
1987-01-01
The current scope, recent progress, and plans for research and development of computational methods for unsteady aerodynamics at the NASA Langley Research Center are reviewed. Both integral-equations and finite-difference method for inviscid and viscous flows are discussed. Although the great bulk of the effort has focused on finite-difference solution of the transonic small-perturbation equation, the integral-equation program is given primary emphasis here because it is less well known.
Human Centered Computing for Mars Exploration
NASA Technical Reports Server (NTRS)
Trimble, Jay
2005-01-01
The science objectives are to determine the aqueous, climatic, and geologic history of a site on Mars where conditions may have been favorable to the preservation of evidence of prebiotic or biotic processes. Human Centered Computing is a development process that starts with users and their needs, rather than with technology. The goal is a system design that serves the user, where the technology fits the task and the complexity is that of the task not of the tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.
In this study, we compute changes in the lattice parameters and elastic stiffness coefficients Cij of body-centered tetragonal (bct) Fe due to Al, B, C, Cu, Mn, Si, and N solutes. Solute strain misfit tensors determine changes in the lattice parameters as well as strain contributions to the changes in the Cij. We also compute chemical contributions to the changes in the Cij, and show that the sum of the strain and chemical contributions agrees with more computationally expensive direct calculations that simultaneously incorporate both contributions. Octahedral interstitial solutes, with C being the most important addition in steels, must be present to stabilize the bct phase over the body-centered cubic phase. We therefore compute the effects of interactions between interstitial C solutes and substitutional solutes on the bct lattice parameters and Cij for all possible solute configurations in the dilute limit, and thermally average the results to obtain effective changes in properties due to each solute. Finally, the computed data can be used to estimate solute-induced changes in mechanical properties such as strength and ductility, and can be directly incorporated into mesoscale simulations of multiphase steels to model solute effects on the bct martensite phase.
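The thermal averaging over solute configurations mentioned in the entry above is, in its simplest form, a Boltzmann-weighted mean. The sketch below illustrates that step only; the configuration energies and property changes are made-up placeholders, not values from the study.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def thermal_average(energies_eV, values, T=300.0):
    """Boltzmann-weighted average of a property over solute configurations."""
    e = np.asarray(energies_eV, dtype=float)
    w = np.exp(-(e - e.min()) / (K_B * T))   # shift by the minimum energy for numerical stability
    w /= w.sum()
    return float(np.dot(w, values))

# hypothetical configuration energies (eV) and changes in an elastic constant (GPa)
energies = [0.00, 0.05, 0.12]
delta_c11 = [-3.1, -2.4, -1.8]
print(thermal_average(energies, delta_c11, T=300.0))
```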
... Computed tomography scan - heart; Calcium scoring; Multi-detector CT scan - heart; Electron beam computed tomography - heart; Agatston ... table that slides into the center of the CT scanner. You will lie on your back with ...
78 FR 68462 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-14
... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel; Brain Injury and... Methodologies Integrated Review Group; Biomedical Computing and Health Informatics Study Section. Date: December...
IUWare and Computing Tools: Indiana University's Approach to Low-Cost Software.
ERIC Educational Resources Information Center
Sheehan, Mark C.; Williams, James G.
1987-01-01
Describes strategies for providing low-cost microcomputer-based software for classroom use on college campuses. Highlights include descriptions of the software (IUWare and Computing Tools); computing center support; license policies; documentation; promotion; distribution; staff, faculty, and user training; problems; and future plans. (LRW)
Adaptive Mesh Experiments for Hyperbolic Partial Differential Equations
1990-02-01
Joseph E. Flaherty, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, and U.S. Army ARDEC, Close Combat Armaments Center, Benet Laboratories, Watervliet, NY 12189-4050. February 1990.
Commercial Pilot Knowledge Test Guide
DOT National Transportation Integrated Search
1995-01-01
The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests including military competence, instrument foreign pilot, and pilot examiner predesignated tests. Refer to appe...
2009-09-15
Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall.
76 FR 24036 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel, Brain Disorders... Integrated Review Group, Biomedical Computing and Health Informatics Study Section. Date: June 7-8, 2011...
Instrument Rating Knowledge Test Guide
DOT National Transportation Integrated Search
1995-01-01
The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests including military competence, instrument foreign pilot, and pilot examiner predesignated tests. Refer to appe...
NASA Technical Reports Server (NTRS)
Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan
1998-01-01
Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.
Displaying Computer Simulations Of Physical Phenomena
NASA Technical Reports Server (NTRS)
Watson, Val
1991-01-01
Paper discusses computer simulation as means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in near future. Visual, aural, tactile, and kinesthetic effects used to teach such physical sciences as dynamics of fluids. Recommends classrooms in universities, government, and industry be linked to advanced computing centers so computer simulations integrated into education process.
ERIC Educational Resources Information Center
Smith, Peter, Ed.
Papers from a conference on small college computing issues are: "An On-line Microcomputer Course for Pre-service Teachers" (Mary K. Abkemeier); "The Mathematics and Computer Science Learning Center (MLC)" (Solomon T. Abraham); "Multimedia for the Non-Computer Science Faculty Member" (Stephen T. Anderson, Sr.); "Achieving Continuous Improvement:…
ERIC Educational Resources Information Center
Sam, Hong Kian; Othman, Abang Ekhsan Abang; Nordin, Zaimuarifuddin Shukri
2005-01-01
Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies, completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and gave information about their use of the Internet. This survey research investigated undergraduates'…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-03
... anchors, both as centers for digital literacy and as hubs for access to public computers. While their... expansion of computer labs, and facilitated deployment of new educational applications that would not have... computer fees to help defray the cost of computers or training fees to help cover the cost of training...
[Photo: a person viewing a 3D visualization of a wind turbine.] The NREL Computational Science center addresses challenges in fields ranging from condensed matter physics and nonlinear dynamics to computational fluid dynamics. NREL is also home to the most energy-efficient data center in the world, featuring the Peregrine supercomputer.
Abstract: Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intent...
ERIC Educational Resources Information Center
Nikirk, Martin
2006-01-01
This article discusses a computer game design and animation pilot at Washington County Technical High School as part of the advanced computer applications completer program. The focus of the instructional program is to teach students the 16 components of computer game design through a team-centered, problem-solving instructional format. Among…
Some Measurement and Instruction Related Considerations Regarding Computer Assisted Testing.
ERIC Educational Resources Information Center
Oosterhof, Albert C.; Salisbury, David F.
The Assessment Resource Center (ARC) at Florida State University provides computer assisted testing (CAT) for approximately 4,000 students each term. Computer capabilities permit a small proctoring staff to administer tests simultaneously to large numbers of students. Programs provide immediate feedback for students and generate a variety of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-07
...), Defense Manpower Data Center (DMDC) and the Office of the Assistant Secretary of Defense (Health Affairs.../TRICARE. DMDC will receive the results of the computer match and provide the information to TMA for use in...
Introduction to the theory of machines and languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weidhaas, P. P.
1976-04-01
This text is intended to be an elementary "guided tour" through some basic concepts of modern computer science. Various models of computing machines and formal languages are studied in detail. Discussions center around questions such as, "What is the scope of problems that can or cannot be solved by computers?"
Management Needs for Computer Support.
ERIC Educational Resources Information Center
Irby, Alice J.
University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…
Computer Training for Early Childhood Educators.
ERIC Educational Resources Information Center
Specht, Jacqueline; Wood, Eileen; Willoughby, Teena
Recent research in early childhood education (ECE) centers suggests that some teacher characteristics are not at a level that would support computer learning opportunities for children. This study identified areas of support required by teachers to provide a smooth introduction of the computer into the early childhood education classroom.…
Administration of Computer Resources.
ERIC Educational Resources Information Center
Franklin, Gene F.
Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…
Computer Conferencing: Distance Learning That Works.
ERIC Educational Resources Information Center
Norton, Robert E.; Stammen, Ronald M.
This paper reports on a computer conferencing pilot project initiated by the Consortium for the Development of Professional Materials for Vocational Education and developed at the Center on Education and Training for Employment at Ohio State University. The report provides an introduction to computer conferencing and describes the stages of the…
75 FR 54162 - Privacy Act of 1974
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-03
... Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS Computer Match No. 2010-01; HHS Computer Match No. 1006] Privacy Act of 1974 AGENCY: Department of Health and...
Presentation Software and the Single Computer.
ERIC Educational Resources Information Center
Brown, Cindy A.
1998-01-01
Shows how the "Kid Pix" software and a single multimedia computer can aid classroom instruction for kindergarten through second grade. Topics include using the computer as a learning center for small groups of students; making a "Kid Pix" slide show; using it as an electronic chalkboard; and creating curriculum-related…
78 FR 63196 - Privacy Act System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-23
... Technology Center (ITC) staff and contractors, who maintain the FCC's computer network. Other FCC employees... and Offices (B/ Os); 2. Electronic data, records, and files that are stored in the FCC's computer.... Access to the FACA electronic records, files, and data, which are housed in the FCC's computer network...
Should Professors Ban Laptops?
ERIC Educational Resources Information Center
Carter, Susan Payne; Greenberg, Kyle; Walker, Michael S.
2017-01-01
Laptop computers have become commonplace in K-12 and college classrooms. With that, educators now face a critical decision. Should they embrace computers and put technology at the center of their instruction? Should they allow students to decide for themselves whether to use computers during class? Or should they ban screens altogether and embrace…
Computational Nanotechnology at NASA Ames Research Center, 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996 held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at CalTech, Ralph Merkle at Xerox Parc, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.
Applications of Computer Technology in Complex Craniofacial Reconstruction.
Day, Kristopher M; Gabrick, Kyle S; Sargent, Larry A
2018-03-01
To demonstrate our use of advanced 3-dimensional (3D) computer technology in the analysis, virtual surgical planning (VSP), 3D modeling (3DM), and treatment of complex congenital and acquired craniofacial deformities. We present a series of craniofacial defects treated at a tertiary craniofacial referral center utilizing state-of-the-art 3D computer technology. All patients treated at our center using computer-assisted VSP, prefabricated custom-designed 3DMs, and/or 3D printed custom implants (3DPCI) in the reconstruction of craniofacial defects were included in this analysis. We describe the use of 3D computer technology to precisely analyze, plan, and reconstruct 31 craniofacial deformities/syndromes caused by: Pierre-Robin (7), Treacher Collins (5), Apert's (2), Pfeiffer (2), Crouzon (1) Syndromes, craniosynostosis (6), hemifacial microsomia (2), micrognathia (2), multiple facial clefts (1), and trauma (3). In select cases where the available bone was insufficient for skeletal reconstruction, 3DPCIs were fabricated using 3D printing. We used VSP in 30, 3DMs in all 31, distraction osteogenesis in 16, and 3DPCIs in 13 cases. Utilizing these technologies, the above complex craniofacial defects were corrected without significant complications and with excellent aesthetic results. Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI, as needed. The use of advanced 3D computer technology can be applied safely and potentially improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Computer programs for relational information management data base systems, spherical roller bearing analysis, a generalized pseudoinverse of a rectangular matrix, and software design and documentation language are summarized.
NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology
NREL evaluated the Aquarius liquid-cooled high-performance computing technology, with the aim of influencing modern data center designers toward adoption of liquid cooling. Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation.
NASA propagation information center
NASA Technical Reports Server (NTRS)
Smith, Ernest K.; Flock, Warren L.
1990-01-01
The NASA Propagation Information Center became formally operational in July 1988. It is located in the Department of Electrical and Computer Engineering of the University of Colorado at Boulder. The center is several things: a communications medium linking the propagation program with the outside world, a mechanism for internal communication within the program, and an aid to management.
ERIC Educational Resources Information Center
Dewey, Patrick R.
1986-01-01
The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)
NASA Propagation Information Center
NASA Technical Reports Server (NTRS)
Smith, Ernest K.; Flock, Warren L.
1989-01-01
The NASA Propagation Information Center became formally operational in July 1988. It is located in the Department of Electrical and Computer Engineering of the University of Colorado at Boulder. The Center is several things: a communications medium linking the propagation program with the outside world, a mechanism for internal communication within the program, and an aid to management.
Center for space microelectronics technology
NASA Technical Reports Server (NTRS)
1993-01-01
The 1992 Technical Report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the center during the past year. The report lists 187 publications, 253 presentations, and 111 new technology reports and patents in the areas of solid-state devices, photonics, advanced computing, and custom microcircuits.
14 CFR 135.63 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of gravity limits; (5) The center of gravity of the loaded aircraft, except that the actual center of gravity need not be computed if the aircraft is loaded according to a loading schedule or other approved method that ensures that the center of gravity of the loaded aircraft is within approved limits. In those...
14 CFR 135.63 - Recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... of gravity limits; (5) The center of gravity of the loaded aircraft, except that the actual center of gravity need not be computed if the aircraft is loaded according to a loading schedule or other approved method that ensures that the center of gravity of the loaded aircraft is within approved limits. In those...
14 CFR 135.63 - Recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... of gravity limits; (5) The center of gravity of the loaded aircraft, except that the actual center of gravity need not be computed if the aircraft is loaded according to a loading schedule or other approved method that ensures that the center of gravity of the loaded aircraft is within approved limits. In those...
14 CFR 135.63 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of gravity limits; (5) The center of gravity of the loaded aircraft, except that the actual center of gravity need not be computed if the aircraft is loaded according to a loading schedule or other approved method that ensures that the center of gravity of the loaded aircraft is within approved limits. In those...
14 CFR 135.63 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... of gravity limits; (5) The center of gravity of the loaded aircraft, except that the actual center of gravity need not be computed if the aircraft is loaded according to a loading schedule or other approved method that ensures that the center of gravity of the loaded aircraft is within approved limits. In those...
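The load-manifest item referred to in § 135.63(c)(5) above is the standard weight-and-balance computation: the loaded center of gravity is the total moment divided by the total weight, checked against the approved limits. A minimal sketch follows; the station arms, weights, and limits are hypothetical, not values from any aircraft's approved loading schedule.

```python
def center_of_gravity(items):
    """Return (cg_arm, total_weight) for a list of (weight_lb, arm_in) pairs."""
    total_weight = sum(w for w, _ in items)
    total_moment = sum(w * a for w, a in items)
    return total_moment / total_weight, total_weight

# hypothetical loading: empty aircraft, crew, passengers, baggage, fuel
loading = [(9300, 232.0), (340, 170.0), (1200, 250.0), (250, 300.0), (1800, 240.0)]
cg, weight = center_of_gravity(loading)
forward_limit, aft_limit = 228.0, 244.0   # placeholder approved CG limits (inches)
print(f"CG = {cg:.1f} in at {weight} lb; within limits: {forward_limit <= cg <= aft_limit}")
```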
Lessons Learned in Starting and Running a Neighborhood Networks Center.
ERIC Educational Resources Information Center
Department of Housing and Urban Development, Washington, DC.
This guide shares information about setting up and operating Neighborhood Networks centers. (These centers operate in Department of Housing and Urban Development-assisted or -insured housing nationwide to help low-income people boost their basic skills and find good jobs, learn to use computers and the Internet, run businesses, improve their…
[AERA. Dream machines and computing practices at the Mathematical Center].
Alberts, Gerard; De Beer, Huub T
2008-01-01
Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the computing department have a working stored program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such super programs. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.
2001 Flight Mechanics Symposium
NASA Technical Reports Server (NTRS)
Lynch, John P. (Editor)
2001-01-01
This conference publication includes papers and abstracts presented at the Flight Mechanics Symposium held on June 19-21, 2001. Sponsored by the Guidance, Navigation and Control Center of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to attitude/orbit determination, prediction and control; attitude simulation; attitude sensor calibration; theoretical foundation of attitude computation; dynamics model improvements; autonomous navigation; constellation design and formation flying; estimation theory and computational techniques; Earth environment mission analysis and design; and, spacecraft re-entry mission design and operations.
The psychology of computer displays in the modern mission control center
NASA Technical Reports Server (NTRS)
Granaas, Michael M.; Rhea, Donald C.
1988-01-01
Work at NASA's Western Aeronautical Test Range (WATR) has demonstrated the need for increased consideration of psychological factors in the design of computer displays for the WATR mission control center. These factors include color perception, memory load, and cognitive processing abilities. A review of relevant work in the human factors psychology area is provided to demonstrate the need for this awareness. The information provided should be relevant in control room settings where computerized displays are being used.
1983-05-01
References cited include "Parallel Computation that Assigns Canonical Object-Based Frames of Reference," Proc. 7th Int. Joint Conf. on Artificial Intelligence (IJCAI-81), Vol. 2, and "Perception of Linear Structure in Imaged Data," TN 276, Artificial Intelligence Center, SRI International, Feb. 1983. Prepared by Martin A. Fischler, Program Director and Principal Investigator, Artificial Intelligence Center, SRI International, May 1983.
1979-01-01
...language education in recent years can be seen in the movement from a teacher- to a learner-centered approach. The best evidence of a teacher-centered... most dramatic effect on how the learner is viewed. The learner now is recognized as an active participant in the learning process rather than as a... best, is optional. Cognitive psychologists, such as Ausubel, and many foreign language educators (Rivers, 1976; Grittner, 1977) believe that practice...
Simulation Applications at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Inouye, M.
1984-01-01
Aeronautical applications of simulation technology at Ames Research Center are described. The largest wind tunnel in the world is used to determine the flow field and aerodynamic characteristics of various aircraft, helicopter, and missile configurations. Large computers are used to obtain similar results through numerical solutions of the governing equations. Capabilities are illustrated by computer simulations of turbulence, aileron buzz, and an exhaust jet. Flight simulators are used to assess the handling qualities of advanced aircraft, particularly during takeoff and landing.
Data centers as dispatchable loads to harness stranded power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kibaek; Yang, Fan; Zavala, Victor M.
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
Data centers as dispatchable loads to harness stranded power
Kim, Kibaek; Yang, Fan; Zavala, Victor M.; ...
2016-07-20
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
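To make the dispatchable-load idea in the two entries above concrete, the toy single-bus dispatch problem below lets a data center absorb surplus wind that would otherwise be spilled. This is only a minimal sketch with made-up wind, load, capacity, and cost numbers solved with scipy.optimize.linprog; it is not the authors' network-constrained stochastic formulation.

```python
from scipy.optimize import linprog

# Toy single-bus hour: wind plus fossil generation covers a fixed inflexible load,
# a dispatchable data-center load, and any spilled wind.
wind_avail = 120.0     # MW of available wind (hypothetical)
fixed_load = 80.0      # MW of inflexible load
dc_capacity = 50.0     # MW requested by the dispatchable data center
gen_capacity = 100.0   # MW of fossil capacity
gen_cost, dc_value, spill_penalty = 30.0, 10.0, 5.0   # $/MWh placeholders

# decision variables: x = [fossil_gen, dc_load_served, wind_spilled]
c = [gen_cost, -dc_value, spill_penalty]     # minimize cost minus the value of served DC load
A_eq = [[1.0, -1.0, -1.0]]                   # power balance: gen + wind = fixed + dc + spill
b_eq = [fixed_load - wind_avail]
bounds = [(0, gen_capacity), (0, dc_capacity), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
fossil, dc_served, spill = res.x
print(f"fossil={fossil:.1f} MW, data center served={dc_served:.1f} MW, spill={spill:.1f} MW")
```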
System analysis for the Huntsville Operation Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1986-01-01
A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the Dec VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. The analytical analysis of the ETHERNET LAN and the video terminals (VTs) distribution system are presented. An interface analysis of the smart terminal network model which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is presented.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco
2012-01-01
Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.
NASA Astrophysics Data System (ADS)
Molthan, A.; Case, J.; Venner, J.; Moreno-Madriñán, M. J.; Delgado, F.
2012-12-01
Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.
Flight and Ground Instructor Knowledge Test Guide
DOT National Transportation Integrated Search
1994-01-01
The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests including military competence, instrument foreign pilot, and pilot examiner screening tests. Refer to appendix...
2009-09-15
Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall from Chris Kemp.
High-Performance Computing Data Center Power Usage Effectiveness
When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an ambitious power usage effectiveness (PUE) target for its high-performance computing data center. The facility's non-IT energy use includes heating, ventilation, and air conditioning (HVAC), which captures the fan walls and fan coils that support the data center.
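Power usage effectiveness itself is just the ratio of total facility energy to IT equipment energy, so an overhead-free data center would score exactly 1.0. The sketch below uses placeholder meter readings, not NREL's published figures.

```python
def pue(it_energy_kwh, cooling_kwh, other_facility_kwh):
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    total = it_energy_kwh + cooling_kwh + other_facility_kwh
    return total / it_energy_kwh

# hypothetical annual meter readings (kWh)
print(round(pue(it_energy_kwh=5_000_000, cooling_kwh=300_000, other_facility_kwh=150_000), 2))
# 1.09, i.e. 9% overhead beyond the IT load
```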
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Computer programs for degaussing, magnetic field calculation, low speed wing flap systems aerodynamics, structural panel analysis, dynamic stress/strain data acquisition, allocation and network scheduling, and digital filters are discussed.
Brief Survey of TSC Computing Facilities
DOT National Transportation Integrated Search
1972-05-01
The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...
Computer Center: BASIC String Models of Genetic Information Transfer.
ERIC Educational Resources Information Center
Spain, James D., Ed.
1984-01-01
Discusses some of the major genetic information processes which may be modeled by computer program string manipulation, focusing on replication and transcription. Also discusses instructional applications of using string models. (JN)
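The string-manipulation idea behind such models is easy to illustrate: replication and transcription become character-by-character translations of a DNA template string. The original column used BASIC; the Python sketch below is only an illustrative translation of the concept, not the published program.

```python
# Model replication and transcription as string operations on a DNA template strand.
DNA_COMPLEMENT = str.maketrans("ATGC", "TACG")
RNA_COMPLEMENT = str.maketrans("ATGC", "UACG")

def replicate(template: str) -> str:
    """Return the complementary DNA strand of a template strand."""
    return template.translate(DNA_COMPLEMENT)

def transcribe(template: str) -> str:
    """Return the mRNA transcribed from a DNA template strand."""
    return template.translate(RNA_COMPLEMENT)

template = "TACGGATTC"
print(replicate(template))   # ATGCCTAAG
print(transcribe(template))  # AUGCCUAAG
```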
NASA Astrophysics Data System (ADS)
Kafai, Yasmin B.; Peppler, Kylie A.; Chiu, Grace M.
For the last twenty years, issues of the digital divide have driven efforts around the world to address the lack of access to computers and the Internet, pertinent and language-appropriate content, and technical skills in low-income communities (Schuler & Day, 2004a and b). The title of our paper makes reference to a milestone publication (Schon, Sanyal, & Mitchell, 1998) that showcased some of the early work and thinking in this area. Schon, Sanyal and Mitchell's edited volume included an article outlining the Computer Clubhouse, a type of community technology center model developed to create opportunities for youth in low-income communities to become creators and designers of technologies (1998). The model has been very successful in scaling up, with over 110 Computer Clubhouses now in existence worldwide.
Tashiro, Yasutaka; Okazaki, Ken; Iwamoto, Yukihide
2015-01-01
Purpose We aimed to clarify the distance between the anteromedial (AM) bundle and posterolateral (PL) bundle tunnel-aperture centers by simulating the anatomical femoral tunnel placement during double-bundle anterior cruciate ligament reconstruction using 3-D computer-aided design models of the knee, in order to discuss the risk of tunnel overlap. Relationships between the AM to PL center distance, body height, and sex difference were also analyzed. Patients and methods The positions of the AM and PL tunnel centers were defined based on previous studies using the quadrant method, and were superimposed anatomically onto the 3-D computer-aided design knee models from 68 intact femurs. The distance between the tunnel centers was measured using the 3-D DICOM software package. The correlation between the AM–PL distance and the subject’s body height was assessed, and a cutoff height value for a higher risk of overlap of the AM and PL tunnel apertures was identified. Results The distance between the AM and PL centers was 10.2±0.6 mm in males and 9.4±0.5 mm in females (P<0.01). The AM–PL center distance demonstrated good correlation with body height in both males (r=0.66, P<0.01) and females (r=0.63, P<0.01). When 9 mm was defined as the critical distance between the tunnel centers to preserve a 2 mm bony bridge between the two tunnels, the cutoff value was calculated to be a height of 160 cm in males and 155 cm in females. Conclusion When AM and PL tunnels were placed anatomically in simulated double-bundle anterior cruciate ligament reconstruction, the distance between the two tunnel centers showed a strong positive correlation with body height. In cases with relatively short stature, the AM and PL tunnel apertures are considered to be at a higher risk of overlap when surgeons choose the double-bundle technique. PMID:26170727
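The height correlation and the 9 mm criterion reported above reduce to a simple per-patient check once the tunnel-center distances are known. The sketch below uses synthetic heights and distances rather than the study data, and assumes a 7 mm tunnel diameter so that a 2 mm bony bridge requires at least 9 mm between centers.

```python
import numpy as np

rng = np.random.default_rng(0)
height_cm = rng.uniform(150, 185, size=68)                    # synthetic cohort
am_pl_mm = 0.05 * height_cm + rng.normal(1.5, 0.4, size=68)   # synthetic AM-PL center distances

# correlation between tunnel-center distance and body height
r = np.corrcoef(height_cm, am_pl_mm)[0, 1]

# overlap-risk check: assumed 7 mm tunnels plus a 2 mm bony bridge -> 9 mm center spacing
critical_mm = 7.0 + 2.0
at_risk = am_pl_mm < critical_mm
print(f"r = {r:.2f}; knees below the {critical_mm:.0f} mm criterion: {int(at_risk.sum())}")
```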
PNNL's Building Operations Control Center
Belew, Shan
2018-01-16
PNNL's Building Operations Control Center (BOCC) video provides an overview of the center, its capabilities, and its objectives. The BOCC was relocated to PNNL's new 3820 Systems Engineering Building in 2015. Although a key focus of the BOCC is on monitoring and improving the operations of PNNL buildings, the center's state-of-the-art computational, software and visualization resources also have provided a platform for PNNL buildings-related research projects.
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
NASA Astrophysics Data System (ADS)
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm. The tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to correct the input SLN horizontally. The proposed method is tested on 200 tilted SLN images and is shown to be effective, with a tilt correction rate of 80.5%.
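The geometric core of the correction, fitting a line through the character center-points and rotating the image back by the line's angle, can be sketched in a few lines. The version below substitutes an ordinary least-squares fit and a plain rotation matrix for the paper's MSER detection and M-estimator fit, and the center-points are placeholders.

```python
import numpy as np

def estimate_tilt_deg(centers):
    """Fit a straight line to character center-points and return its tilt angle."""
    xs, ys = np.asarray(centers, dtype=float).T
    slope, _ = np.polyfit(xs, ys, 1)        # least-squares line y = slope * x + b
    return np.degrees(np.arctan(slope))

def correction_matrix(angle_deg, cx, cy):
    """2x3 affine matrix rotating by -angle_deg about (cx, cy) to undo the tilt."""
    a = np.radians(-angle_deg)
    cos_a, sin_a = np.cos(a), np.sin(a)
    return np.array([[cos_a, -sin_a, cx - cos_a * cx + sin_a * cy],
                     [sin_a,  cos_a, cy - sin_a * cx - cos_a * cy]])

# hypothetical character centers extracted from a tilted ship-license-number image
centers = [(12, 40), (34, 44), (57, 49), (80, 53), (102, 58)]
tilt = estimate_tilt_deg(centers)
M = correction_matrix(tilt, cx=64, cy=48)   # usable with an affine image-warp routine
print(f"estimated tilt: {tilt:.1f} degrees")
```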
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman's Rho test to evaluate the strength of relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS as measured by the rate of organizational learning. We used previously collected survey data from the community health centers Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
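The statistical half of the dual-modeling approach, a Spearman rank correlation between CDS utilization and reported screening improvement, is a one-line computation with scipy; the facility-level scores below are placeholders, not the HDCC survey data.

```python
from scipy.stats import spearmanr

# hypothetical facility-level scores: CDS utilization vs. self-reported improvement
cds_utilization = [0.2, 0.4, 0.5, 0.7, 0.8, 0.9]
screening_improvement = [1, 2, 2, 3, 4, 4]

rho, p_value = spearmanr(cds_utilization, screening_improvement)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```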
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
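The monolithic Lagrange-multiplier coupling described above leads, after discretization, to a saddle-point (KKT) system in which the constraints appear as extra rows and columns. The toy example below solves such a system for two spring-supported degrees of freedom tied together by a distance constraint; it is only a structural sketch, not the paper's ALE finite-volume/finite-element discretization.

```python
import numpy as np

def solve_with_lagrange(K, f, C, d):
    """Solve min 1/2 x^T K x - f^T x subject to C x = d via the monolithic KKT system."""
    n, m = K.shape[0], C.shape[0]
    kkt = np.block([[K, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([f, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]            # primal unknowns, Lagrange multipliers

# two springs to ground with applied forces, displacements tied so that
# x0 - x1 = 0.1 (a rigid-link style distance constraint)
K = np.diag([2.0, 1.0])
f = np.array([1.0, 0.5])
C = np.array([[1.0, -1.0]])
d = np.array([0.1])
x, lam = solve_with_lagrange(K, f, C, d)
print(x, lam)
```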
Mitchell, Shannon Gwin; Monico, Laura B; Gryczynski, Jan; O'Grady, Kevin E; Schwartz, Robert P
2015-01-01
The use of computers for identifying and intervening with stigmatized behaviors, such as drug use, offers promise for underserved, rural areas; however, the acceptability and appropriateness of using computerized brief intervention (CBIs) must be taken into consideration. In the present study, 12 staff members representing a range of clinic roles in two rural, federally qualified health centers completed semi-structured interviews in a qualitative investigation of CBI vs. counselor-delivered individual brief intervention (IBI). Thematic content analysis was conducted using a constant comparative method, examining the range of responses within each interview as well as data across interview respondents. Overall, staff found the idea of providing CBIs both acceptable and appropriate for their patient population. Acceptability by clinic staff centered on the ready availability of the CBI. Staff also believed that patients might be more forthcoming in response to a computer program than a personal interview. However, some staff voiced reservations concerning the appropriateness of CBIs for subsets of patients, including older patients, illiterate individuals, or those unfamiliar with computers. Findings support the potential suitability and potential benefits of providing CBIs to patients in rural health centers.
Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method †
Li, Yuehua; Zhou, Jingbo; Huang, Fengshan; Liu, Lijian
2017-01-01
Laser stripe center extraction is a key step in the profile measurement of line structured light sensors (LSLS). To accurately obtain the center coordinates at the sub-pixel level, an improved gray-gravity method (IGGM) was proposed. Firstly, the center points of the stripe were computed using the gray-gravity method (GGM) for all columns of the image. By fitting these points with the moving least squares algorithm, the tangential vector, the normal vector, and the radius of curvature can be robustly obtained. A rectangular region was then defined around each center point; the lengths of its two sides parallel to the tangential vector adapt to the radius of curvature. After that, the coordinate of each center point was recalculated within the rectangular region and in the direction of the normal vector. The center uncertainty was also analyzed based on the Monte Carlo method. The experimental results indicate that the IGGM is suitable for both smooth stripes and stripes with sharp corners. High-accuracy center points can be obtained at a relatively low computational cost. The measured results for the stairs and the screw surface further demonstrate the effectiveness of the method. PMID:28394288
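For reference, a minimal sketch of the baseline gray-gravity step (the per-column intensity-weighted centroid) is shown below; the IGGM refinement along the local normal within the adaptive rectangular region is not reproduced here, and the threshold parameter is an assumption:

```python
# Sketch of the basic gray-gravity method (GGM): the stripe center in each image
# column is taken as the intensity-weighted centroid of the rows in that column.
import numpy as np

def gray_gravity_centers(image: np.ndarray, min_intensity: float = 10.0) -> np.ndarray:
    """Return sub-pixel row coordinates of the stripe center for each column."""
    rows = np.arange(image.shape[0], dtype=float)
    centers = np.full(image.shape[1], np.nan)
    for col in range(image.shape[1]):
        weights = image[:, col].astype(float)
        weights[weights < min_intensity] = 0.0   # suppress background noise
        total = weights.sum()
        if total > 0:
            centers[col] = np.dot(rows, weights) / total
    return centers
```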
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space-vehicle- and space-program-agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of this mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes, and technology into legacy operations. The general trend in the IT industry over the past several years has been toward a data-centric computing infrastructure. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization, and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model that uses virtualization to provide the MCC infrastructure as a service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers and increasing utilization rates for compute platforms through virtualization, while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle Program (SSP).
Computer Technology for Industry
NASA Technical Reports Server (NTRS)
1979-01-01
In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and to newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already-developed computer programs that have secondary utility. A computer program is essentially a set of instructions that tells the computer how to produce desired information or effects by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC)®, located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy, and other technology-generating agencies of the government. The Center receives a continual flow of software packages, screens them for adaptability to private-sector usage, stores them, and informs potential customers of their availability.
Center removal amount control of magnetorheological finishing process by spiral polishing way
NASA Astrophysics Data System (ADS)
Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen; Xiao, Hong; Luo, Qing; Zheng, Yongcheng
2010-10-01
Spiral polishing is a traditional process in computer-controlled optical surfacing. However, the additional polishing amount is large and the center polishing amount is difficult to control. First, a simplified mathematical model is presented for magnetorheological finishing, which indicates that the center polishing amount and the additional polishing amount are proportional to the length and peak value of the magnetorheological finishing influence function and inversely proportional to the pitch and rotation rate of the spiral track, and that the center polishing amount is much bigger than the average polishing amount. Second, the relationships between "tool feed direction and center polishing amount", "spiral pitch and calculation accuracy of the influence matrix for the dwell time solution", "spiral pitch and center polishing amount", and "peak removal rate, dimensions of the removal function, and center removal amount" are studied by numerical computation along an Archimedes spiral path. The results show that the center polishing amount is much bigger in the feed stage than in the backhaul stage when the head of the influence function points toward the workpiece edge during feeding; the bigger the pitch, the bigger the calculation error of the influence matrix elements; the bigger the pitch, the smaller the center polishing amount; and the smaller the peak removal rate and the dimensions of the removal function, the smaller the center removal amount. Finally, polishing results are given, indicating that the center polishing amount is acceptable with a suitable ratio of feed-stage to backhaul-stage polishing amount and a suitable spiral pitch during a magnetorheological finishing procedure using spiral motion.
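Read schematically, the stated dependencies of the center polishing amount can be summarized by the proportionality below, where P and L denote the peak value and length of the influence function and p and ω the spiral pitch and rotation rate (illustrative symbols, not the paper's exact model):

```latex
A_{\mathrm{center}} \;\propto\; \frac{P\,L}{p\,\omega}
```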
NOAA/West Coast and Alaska Tsunami Warning Center Atlantic Ocean response criteria
Whitmore, P.; Refidaff, C.; Caropolo, M.; Huerfano-Moreno, V.; Knight, W.; Sammler, W.; Sandrik, A.
2009-01-01
West Coast/Alaska Tsunami Warning Center (WCATWC) response criteria for earthquakes occurring in the Atlantic and Caribbean basins are presented. Initial warning center decisions are based on an earthquake's location, magnitude, depth, distance from coastal locations, and precomputed threat estimates based on tsunami models computed from similar events. The new criteria will help limit the geographical extent of warnings and advisories to threatened regions and complement the new operational tsunami product suite. Criteria are set for tsunamis generated by earthquakes, which are by far the main cause of tsunami generation (either directly through sea floor displacement or indirectly by triggering sub-sea landslides). The new criteria require development of a threat database which sets warning or advisory zones based on location, magnitude, and pre-computed tsunami models. The models determine coastal tsunami amplitudes based on likely tsunami source parameters for a given event. Based on the computed amplitude, warning and advisory zones are pre-set.
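A minimal sketch of the kind of precomputed threat-database lookup described above is given below; the regions, magnitude bands, and response text are hypothetical placeholders, not the actual WCATWC criteria:

```python
# Hypothetical precomputed threat-database lookup keyed on (region, magnitude band).
THREAT_DB = {
    ("caribbean", "M7.0-7.8"): "advisory: coasts within 500 km",
    ("caribbean", "M7.9+"):    "warning: basin-wide",
    ("atlantic",  "M7.0-7.8"): "information statement only",
    ("atlantic",  "M7.9+"):    "advisory: nearest coastlines",
}

def response_for(region: str, magnitude: float) -> str:
    band = "M7.9+" if magnitude >= 7.9 else "M7.0-7.8" if magnitude >= 7.0 else None
    if band is None:
        return "no product: below response threshold"
    return THREAT_DB.get((region, band), "manual evaluation required")

print(response_for("caribbean", 8.1))
```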
Production Management System for AMS Computing Centres
NASA Astrophysics Data System (ADS)
Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.
2017-10-01
The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia, and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production) as well as in physics analysis. A production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquisition, submission, monitoring, transfer, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model and is implemented in the scripting languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
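A minimal sketch of a DFA-style job lifecycle backed by sqlite3, in the spirit of the system described above, is shown below; the state names and transitions are illustrative assumptions rather than the actual AMS implementation:

```python
# DFA-style job lifecycle stored in sqlite3 (illustrative states and transitions).
import sqlite3

TRANSITIONS = {                               # allowed state transitions (the DFA)
    "acquired":     {"submitted"},
    "submitted":    {"running", "failed"},
    "running":      {"transferring", "failed"},
    "transferring": {"accounted", "failed"},
    "failed":       {"submitted"},            # resubmission
}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT NOT NULL)")

def advance(job_id: int, new_state: str) -> None:
    """Move a job to new_state only if the DFA allows that transition."""
    (state,) = db.execute("SELECT state FROM jobs WHERE id = ?", (job_id,)).fetchone()
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    db.execute("UPDATE jobs SET state = ? WHERE id = ?", (new_state, job_id))
    db.commit()

db.execute("INSERT INTO jobs (state) VALUES ('acquired')")
advance(1, "submitted")
advance(1, "running")
```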
EHR use and patient satisfaction: What we learned.
Farber, Neil J; Liu, Lin; Chen, Yunan; Calvitti, Alan; Street, Richard L; Zuest, Danielle; Bell, Kristin; Gabuzda, Mark; Gray, Barbara; Ashfaq, Shazia; Agha, Zia
2015-11-01
Few studies have quantitatively examined the degree to which the use of the computer affects patients' satisfaction with the clinician and the quality of the visit. We conducted a study to examine this association. Twenty-three clinicians (21 internal medicine physicians, 2 nurse practitioners) were recruited from 4 Veterans Affairs Medical Center (VAMC) clinics located in San Diego, Calif. Five to six patients for most clinicians (one patient each for 2 of the clinicians) were recruited to participate in a study of patient-physician communication. The clinicians' computer use and the patient-clinician interactions in the exam room were captured in real time via video recordings of the interactions and the computer screen, and through the use of the Morae usability testing software system, which recorded clinician clicks and scrolls on the computer. After the visit, patients were asked to complete a satisfaction survey. The final sample consisted of 126 consultations. Total patient satisfaction (beta=0.014; P=.027) and patient satisfaction with patient-centered communication (beta=0.02; P=.02) were significantly associated with higher clinician “gaze time” at the patient. A higher percentage of gaze time during a visit (controlling for the length of the visit) was significantly associated with greater satisfaction with patient-centered communication (beta=0.628; P=.033). Higher clinician gaze time at the patient predicted greater patient satisfaction. This suggests that clinicians would be well served to refine their multitasking skills so that they communicate in a patient-centered manner while performing necessary computer-related tasks. These findings also have important implications for clinical training with respect to using an electronic health record (EHR) system in ways that do not impede the one-on-one conversation between clinician and patient.
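As a hedged sketch of the type of regression reported above (satisfaction regressed on gaze-time percentage while controlling for visit length), the following uses synthetic data and hypothetical column names, not the study data:

```python
# Illustrative regression of satisfaction on gaze-time percentage, controlling for
# visit length. All data below are synthetic; coefficients carry no clinical meaning.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 126
df = pd.DataFrame({
    "gaze_pct": rng.uniform(0.2, 0.9, n),        # fraction of visit spent gazing at patient
    "visit_minutes": rng.uniform(10, 40, n),
})
df["satisfaction"] = 3.0 + 0.6 * df["gaze_pct"] + rng.normal(0, 0.3, n)

model = smf.ols("satisfaction ~ gaze_pct + visit_minutes", data=df).fit()
print(model.params["gaze_pct"], model.pvalues["gaze_pct"])
```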
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Pierre-Henri, E-mail: maire@celia.u-bordeaux1.fr; Abgrall, Rémi, E-mail: remi.abgrall@math.u-bordeau1.fr; Breil, Jérôme, E-mail: breil@celia.u-bordeaux1.fr
2013-02-15
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic–plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure, which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell forces. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. The sub-cell force and the nodal velocity used to move the grid are computed consistently with the cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension, in the Lagrangian framework, of the Generalized Riemann Problem methodology introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
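For context, a textbook-style sketch of the radial return projection for von Mises (J2) plasticity mentioned above is shown below; the paper couples this step with its cell-centered Lagrangian solver, which is not reproduced here:

```python
# Textbook radial return: project a trial deviatoric stress back onto the von Mises
# yield surface when the trial state exceeds the yield stress.
import numpy as np

def radial_return(s_trial: np.ndarray, yield_stress: float) -> np.ndarray:
    """Return the deviatoric stress after the radial return correction."""
    j2 = 0.5 * np.tensordot(s_trial, s_trial)    # second invariant of the deviator, J2 = s:s / 2
    von_mises = np.sqrt(3.0 * j2)
    if von_mises <= yield_stress:                # elastic: trial state is admissible
        return s_trial
    return s_trial * (yield_stress / von_mises)  # plastic: scale radially onto the yield surface

s = np.array([[200.0,  50.0,   0.0],
              [ 50.0, -100.0,  0.0],
              [  0.0,   0.0, -100.0]])           # trace-free trial deviator (e.g. in MPa)
print(radial_return(s, yield_stress=250.0))
```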
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.