QUEST Hanford Site Computer Users - What do they do?
DOE Office of Scientific and Technical Information (OSTI.GOV)
WITHERSPOON, T.T.
2000-03-02
The Fluor Hanford Chief Information Office requested that a computer-user survey be conducted to determine users' dependence on the computer and its importance to their ability to accomplish their work. Daily use trends and future needs of Hanford Site personal computer (PC) users were also to be defined. A primary objective was to use the data to determine how budgets should be focused toward providing those services that are truly needed by the users.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of job requirements and more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world lacks diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the two technologies, allowing users to interact with computing sites through one of the standard Compute Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system re-routes, in a transparent way, end-user jobs into dynamically launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom virtual machine at the site. This allows cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
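As a rough sketch of the re-routing logic described here, consider the toy decision function below; the Job model, the launch_vm_worker helper, and the platform names are invented for illustration and do not come from the HTCondor-CE or OpenStack code bases:

```python
# Sketch: route a job to a dynamically launched VM worker node when the
# static batch nodes cannot satisfy its platform requirements.
from dataclasses import dataclass

@dataclass
class Job:
    owner: str
    required_os: str           # e.g. "centos7"
    custom_image: str = ""     # optional user-supplied VM image name

STATIC_NODE_OS = {"sl6"}       # platforms offered by the static batch nodes

def launch_vm_worker(image: str) -> str:
    # Hypothetical helper: in a real system this would boot an OpenStack
    # instance (e.g. via openstacksdk) and register it as a worker node.
    return f"vm-worker-running-{image}"

def route(job: Job) -> str:
    if job.custom_image:                   # user asked for a custom VM image
        return launch_vm_worker(job.custom_image)
    if job.required_os in STATIC_NODE_OS:  # static batch nodes suffice
        return "static-batch-node"
    return launch_vm_worker(job.required_os)  # boot a matching VM

print(route(Job("alice", "centos7")))  # -> VM worker
print(route(Job("bob", "sl6")))        # -> static node
```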
Optimizing Targeting of Intrusion Detection Systems in Social Networks
NASA Astrophysics Data System (ADS)
Puzis, Rami; Tubi, Meytal; Elovici, Yuval
Internet users communicate with each other in various ways: by email, instant messaging, social networking, accessing Web sites, etc. In the course of communicating, users may unintentionally copy files contaminated with computer viruses and worms [1, 2] to their computers and spread them to other users [3]. (Hereafter we will use the term "threats" rather than computer viruses and computer worms.) The Internet is the chief source of these threats [4].
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, preventing user or production jobs from being sent to problematic sites.
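The exclusion policy lends itself to a compact illustration. Below is a minimal sketch, assuming an invented 20-test window and 80% success threshold; HammerCloud's and PanDA's real criteria are more involved:

```python
# Sketch: exclude a site from brokerage when its recent functional tests
# fall below a success threshold. Threshold and window are illustrative.
from collections import deque

WINDOW = 20          # number of recent test results to consider
THRESHOLD = 0.80     # minimum success fraction to stay in brokerage

class SiteStatus:
    def __init__(self, name):
        self.name = name
        self.results = deque(maxlen=WINDOW)  # True = test succeeded

    def record(self, success: bool):
        self.results.append(success)

    def is_excluded(self) -> bool:
        if not self.results:
            return False  # no data yet: keep the site online
        rate = sum(self.results) / len(self.results)
        return rate < THRESHOLD

site = SiteStatus("EXAMPLE-SITE")
for ok in [True, True, False, False, False]:
    site.record(ok)
print(site.is_excluded())  # True: 2/5 = 40% success rate
```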
ERIC Educational Resources Information Center
Rollo, J. Michael; Marmarchev, Helen L.
1999-01-01
The explosion of computer applications in the modern workplace has required student affairs professionals to keep pace with technological advances for office productivity. This article recommends establishing administrative computer user groups, utilizing coordinated web site development, and enhancing working relationships as ways of dealing…
Defining Information Needs of Computer Users: A Human Communication Problem.
ERIC Educational Resources Information Center
Kimbrough, Kenneth L.
This exploratory investigation of the process of defining the information needs of computer users and the impact of that process on information retrieval focuses on communication problems. Six sites were visited that used computers to process data or to provide information, including the California Department of Transportation, the California…
FY 72 Computer Utilization at the Transportation Systems Center
DOT National Transportation Integrated Search
1972-08-01
The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...
Robotic Telepresence: Perception, Performance, and User Experience
2012-02-01
defined as “a human-computer-machine condition in which a user receives sufficient information about a remote, real-world site through a machine so...that the user feels physically present at the remote, real-world site” (Aliberti and Bruen, 2006). Telepresence often includes capabilities for a more...outdoor route reconnaissance course (figures 4 and 5) was located at the Molnar MOUT (Military Operations in Urban Terrain) site in Fort Benning, GA. It
Social networking among upper extremity patients.
Rozental, Tamara D; George, Tina M; Chacko, Aron T
2010-05-01
Despite their rising popularity, the health care profession has been slow to embrace social networking sites. These are Web-based initiatives, designed to bring people with common interests or activities under a common umbrella. The purpose of this study is to evaluate social networking patterns among upper extremity patients. A total of 742 anonymous questionnaires were distributed among upper extremity outpatients, with a 62% response rate (462 were completed). Demographic characteristics (gender, age, level of education, employment, type of health insurance, and income stratification) were defined, and data on computer ownership and frequency of social networking use were collected. Social network users and nonusers were compared according to their demographic and socioeconomic characteristics. Our patient cohort consisted of 450 patients. Of those 450 patients, 418 had a high school education or higher, and 293 reported a college or graduate degree. The majority of patients (282) were employed at the time of the survey, and income was evenly distributed among U.S. Census Bureau quintiles. A total of 349 patients reported computer ownership, and 170 reported using social networking sites. When compared to nonusers, social networking users were younger (p < .001), more educated (p < .001), and more likely to be employed (p = .013). Users also had higher income levels (p = .028) and higher rates of computer ownership (p < .001). Multivariate regression revealed that younger age (p < .001), computer ownership (p < .001), and higher education (p < .001) were independent predictors of social networking use. Most users (n = 114) regularly visit a single site. Facebook was the most popular site visited (n = 142), followed by MySpace (n = 28) and Twitter (n = 16). Of the 450 upper extremity patients in our sample, 170 use social networking sites. Younger age, higher level of education, and computer ownership were associated with social networking use. Physicians should consider expanding their use of social networking sites to reach their online patient populations. Copyright 2010 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
SiteDB: Marshalling people and resources available to CMS
NASA Astrophysics Data System (ADS)
Metson, S.; Bonacorsi, D.; Dias Ferreira, M.; Egeland, R.
2010-04-01
In a collaboration the size of CMS (approximately 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use "user friendly" names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems in daily use by CMS Computing operations.
NASA Astrophysics Data System (ADS)
Sareen, Sanjay; Gupta, Sunil Kumar; Sood, Sandeep K.
2017-10-01
Zika virus is a mosquito-borne disease that spreads very quickly in different parts of the world. In this article, we propose a system to prevent and control the spread of Zika virus disease using an integration of fog computing, cloud computing, mobile phones and Internet of things (IoT)-based sensor devices. Fog computing is used as an intermediary layer between the cloud and end users to reduce the latency time and extra communication cost that are usually high in cloud-based systems. A fuzzy k-nearest neighbour is used to diagnose possibly infected users, and the Google Maps web service is used to provide geographic positioning system (GPS)-based risk assessment to prevent the outbreak. It represents each Zika virus (ZikaV)-infected user, mosquito-dense site and breeding site on the Google map, helping government healthcare authorities to control such risk-prone areas effectively and efficiently. The proposed system is deployed on the Amazon EC2 cloud to evaluate its performance and accuracy using a data set of 2 million users. Our system provides a high accuracy of 94.5% for the initial diagnosis of different users according to their symptoms and appropriate GPS-based risk assessment.
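As a rough illustration of the diagnosis step, here is a minimal fuzzy k-nearest-neighbour classifier; the symptom features, fuzzy labels, and parameter choices (k = 3, m = 2) are invented for illustration and are not taken from the paper:

```python
# Sketch of fuzzy k-nearest-neighbour classification: class membership of a
# new sample is a distance-weighted blend of its neighbours' fuzzy labels.
import numpy as np

def fuzzy_knn(X_train, memberships, x, k=3, m=2.0, eps=1e-9):
    """Return class-membership degrees of sample x.
    memberships: (n_samples, n_classes) fuzzy labels of the training data."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                          # k nearest neighbours
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)    # distance weights
    return (w[:, None] * memberships[idx]).sum(axis=0) / w.sum()

# Toy data: two symptom features, classes = (not infected, possibly infected)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
U = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(fuzzy_knn(X, U, np.array([0.85, 0.8])))  # high membership in class 1
```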
Kon, Haruka; Kobayashi, Hiroshi; Sakurai, Naoki; Watanabe, Kiyoshi; Yamaga, Yoshiro; Ono, Takahiro
2017-11-01
The aim of the present study was to clarify differences between personal computer (PC)/mobile device combination and PC-only user patterns. We analyzed access frequency and time spent on a complete denture preclinical website in order to maximize website effectiveness. Fourth-year undergraduate students (N=41) in the preclinical complete denture laboratory course were invited to participate in this survey during the final week of the course, and their login data were tracked. Students accessed video demonstrations and quizzes via our e-learning site/course program and were instructed to view online demonstrations before classes. When the course concluded, participating students filled out a questionnaire about the program, their opinions, and the devices they had used to access the site. Combination users accessed the site significantly more frequently than PC-only users during supplementary learning time, indicating that students with mobile devices studied during lunch breaks and before morning classes. Most students had favorable opinions of the e-learning site, but a few combination users commented that some videos were too long and that descriptive answers were difficult on smartphones. These results imply that the increased accessibility of mobile devices encouraged learning by enabling more efficient use of time between classes. They also suggest that e-learning system improvements should cater to mobile device users by reducing video length and including more short-answer questions. © 2016 John Wiley & Sons Australia, Ltd.
Oha, Kristel; Animägi, Liina; Pääsuke, Mati; Coggon, David; Merisalu, Eda
2014-05-28
Occupational use of computers has increased rapidly over recent decades, and has been linked with various musculoskeletal disorders, which are now the most commonly diagnosed occupational diseases in Estonia. The aim of this study was to assess the prevalence of musculoskeletal pain (MSP) by anatomical region during the past 12 months and to investigate its association with personal characteristics and work-related risk factors among Estonian office workers using computers. In a cross-sectional survey, questionnaires were sent to 415 computer users. Data were collected by self-administered questionnaire from 202 computer users at two universities in Estonia. The questionnaire asked about MSP at different anatomical sites, and about potential individual and work-related risk factors. Associations with risk factors were assessed by logistic regression. Most respondents (77%) reported MSP in at least one anatomical region during the past 12 months. Most prevalent was pain in the neck (51%), followed by low back pain (42%), wrist/hand pain (35%) and shoulder pain (30%). Older age, right-handedness, not currently smoking, emotional exhaustion, belief that musculoskeletal problems are commonly caused by work, and low job security were statistically significant risk factors for MSP at different anatomical sites. A high prevalence of MSP in the neck, low back, wrist/arm and shoulder was observed among Estonian computer users. Psychosocial risk factors were broadly consistent with those reported from elsewhere. While computer users should be aware of ergonomic techniques that can make their work easier and more comfortable, presenting computer use as a serious health hazard may modify health beliefs in a way that is unhelpful.
Bodine, M.W.
1987-01-01
The FORTRAN 77 computer program CLAYFORM apportions the constituents of a conventional chemical analysis of a silicate mineral into a user-selected structure formula. If requested, such as for a clay mineral or other phyllosilicate, the program distributes the structural formula components into appropriate default or user-specified structural sites (tetrahedral, octahedral, interlayer, hydroxyl, and molecular water sites), and for phyllosilicates calculates the layer (tetrahedral, octahedral, and interlayer) charge distribution. The program also creates data files of entered analyses for subsequent reuse. © 1987.
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
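To make the authentication step concrete, here is a minimal sketch of salted, iterated password hashing for controlling site access, using only Python's standard library; the scheme and iteration count are illustrative choices, not what the cited system used:

```python
# Sketch: store a salt and a derived key rather than the password itself,
# and verify candidates with a constant-time comparison.
import hashlib, hmac, os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Create the stored credential: (random salt, PBKDF2-derived key)."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)  # resists timing attacks

salt, key = make_record("correct horse battery staple")
print(verify("correct horse battery staple", salt, key))  # True
print(verify("guess", salt, key))                         # False
```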
Pest management in Douglas-fir seed orchards: a microcomputer decision method
James B. Hoy; Michael I. Haverty
1988-01-01
The computer program described provides a Douglas-fir seed orchard manager (user) with a quantitative method for making insect pest management decisions on a desk-top computer. The decision system uses site-specific information such as estimates of seed crop size, insect attack rates, insecticide efficacy and application costs, weather, and crop value. At sites where...
The microcomputer scientific software series 8: the SYCOOR users manual.
Edgar E. Gutierrez-Espeleta; Gary J. Brand
1993-01-01
Describes how to use SYCOOR, an interactive Macintosh program written in BASIC for computing and adjusting synecological coordinates. Site synecological coordinates are indices of moisture, nutrients, heat, and light computed from lists of plant species present at the site. Graphs of a species' distribution in moisture-nutrient and heat-light space are also displayed...
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
Password selection has long been a difficult issue; traditionally, passwords are either assigned by the computer or chosen by the user. When the computer does the assignment, the passwords are often hard to remember; when the user makes the selection, the passwords are often easy to guess. This paper describes a technique, and a mechanism, to allow users to select passwords which to them are easy to remember but to others would be very difficult to guess. The technique is site, user, and group compatible, and allows rapid changing of constraints imposed upon the password. Although experience with this technique is limited, it appears to have much promise.
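A minimal sketch of a proactive password checker in this spirit follows; the dictionary, rules, and messages are invented for illustration and are not Bishop's actual mechanism:

```python
# Sketch: reject password candidates that violate site-configurable
# constraints before they are ever accepted.
import re

SITE_RULES = [
    (lambda p: len(p) >= 10, "too short"),
    (lambda p: not p.islower(), "all lowercase"),
    (lambda p: re.search(r"\d", p) is not None, "no digit"),
]
DICTIONARY = {"password", "letmein", "qwerty"}  # stand-in word list

def check(candidate: str, username: str):
    if candidate.lower() in DICTIONARY:
        return False, "dictionary word"
    if username.lower() in candidate.lower():
        return False, "contains user name"
    for rule, reason in SITE_RULES:
        if not rule(candidate):
            return False, reason
    return True, "ok"

print(check("rosebud", "alice"))             # rejected: too short
print(check("Blue7.Hippo.Sings", "alice"))   # accepted
```

Because the rule list is plain data, a site can change its constraints quickly, which mirrors the paper's point about rapidly changing the constraints imposed upon passwords.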
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
National Stormwater Calculator User's Guide – VERSION 1.1
This document is the user's guide for running EPA's National Stormwater Calculator (http://www.epa.gov/nrmrl/wswrd/wq/models/swc/). The National Stormwater Calculator is a simple-to-use tool for computing small site hydrology for any location within the US.
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-09-30
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
SplicingTypesAnno: annotating and quantifying alternative splicing events for RNA-Seq data.
Sun, Xiaoyong; Zuo, Fenghua; Ru, Yuanbin; Guo, Jiqiang; Yan, Xiaoyan; Sablok, Gaurav
2015-04-01
Alternative splicing plays a key role in the regulation of the central dogma. Four major types of alternative splicing have been classified: intron retention, exon skipping, alternative 5′ splice sites (alternative donor sites), and alternative 3′ splice sites (alternative acceptor sites). A few algorithms have been developed to detect splice junctions from RNA-Seq reads. However, there are few tools targeting the major alternative splicing types at the exon/intron level. This type of analysis may reveal subtle, yet important events of alternative splicing, and thus help gain deeper understanding of the mechanism of alternative splicing. This paper describes a user-friendly R package for extracting, annotating and analyzing alternative splicing types from sequence alignment files from RNA-Seq. SplicingTypesAnno can: (1) provide annotation for major alternative splicing at the exon/intron level; by comparing with the annotation from a GTF/GFF file, it identifies novel alternative splicing sites; (2) offer a convenient two-level analysis: genome-scale annotation for users with a high performance computing environment, and gene-scale annotation for users with personal computers; (3) generate a user-friendly web report and additional BED files for IGV visualization. SplicingTypesAnno is a user-friendly R package for extracting, annotating and analyzing alternative splicing types at the exon/intron level from sequence alignment files from RNA-Seq. It is publicly available at https://sourceforge.net/projects/splicingtypes/files/ or http://genome.sdau.edu.cn/research/software/SplicingTypesAnno.html. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Tool for Statistical Analysis and Display of Landing Sites
NASA Technical Reports Server (NTRS)
Wawrzyniak, Geoffrey; Kennedy, Brian; Knocke, Philip; Michel, John
2006-01-01
MarsLS is a software tool for analyzing statistical dispersion of spacecraft-landing sites and displaying the results of its analyses. Originally intended for the Mars Exploration Rover (MER) mission, MarsLS is also applicable to landing sites on Earth and non-MER sites on Mars. MarsLS is a collection of interdependent MATLAB scripts that utilize the MATLAB graphical-user-interface software environment to display landing-site data (see figure) on calibrated image-maps of the Martian or other terrain. The landing-site data comprise latitude/longitude pairs generated by Monte Carlo runs of other computer programs that simulate entry, descent, and landing. Using these data, MarsLS can compute a landing-site ellipse, a standard means of depicting the area within which the spacecraft can be expected to land with a given probability. MarsLS incorporates several features for the user's convenience, including capabilities for drawing lines and ellipses, overlaying kilometer or latitude/longitude grids, drawing and/or specifying lines and/or points, entering notes, defining and/or displaying polygons to indicate hazards or areas of interest, and evaluating hazardous and/or scientifically interesting areas. As part of such an evaluation, MarsLS can compute the probability of landing in a specified polygonal area.
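The landing-ellipse computation can be sketched directly from its definition: fit a bivariate normal to the Monte Carlo latitude/longitude pairs and scale its principal axes by a chi-square quantile. The 99% level and toy dispersion below are assumptions for illustration, not MER values:

```python
# Sketch: probability ellipse from Monte Carlo landing points.
import numpy as np

def landing_ellipse(lat, lon, prob=0.99):
    """Return centre, semi-axes and orientation (deg) of the ellipse
    containing the given probability mass under a bivariate normal fit."""
    pts = np.column_stack([lon, lat])
    centre = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    k2 = -2.0 * np.log(1.0 - prob)           # chi-square quantile, 2 dof
    semi_axes = np.sqrt(k2 * evals)          # (minor, major)
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return centre, semi_axes, angle

rng = np.random.default_rng(0)
lat = rng.normal(-14.57, 0.05, 5000)   # toy dispersion about a target
lon = rng.normal(175.48, 0.12, 5000)
print(landing_ellipse(lat, lon))
```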
Dynamic Interactions for Network Visualization and Simulation
2009-03-01
projects.htm, Site accessed January 5, 2009. 12. John S. Weir, Major, USAF, Mediated User-Simulator Interactive Command with Visualization (MUSIC-V). Master's...Computing Sciences in Colleges, December 2005). 14. Enrique Campos-Nanez, "nscript user manual," Department of System Engineering, University of
Improved Interactive Medical-Imaging System
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Twombly, Ian A.; Senger, Steven
2003-01-01
An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views, for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.
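As a small illustration of such manipulations, the sketch below extracts cross sections in specified planes and rotates a synthetic volume about an axis; the data and sizes are invented, and the real system's rendering pipeline is far richer:

```python
# Sketch: cross sections and rotation of a 3-D volume, on synthetic data.
import numpy as np
from scipy import ndimage

volume = np.zeros((64, 64, 64), dtype=np.float32)
volume[20:44, 20:44, 20:44] = 1.0           # a cube standing in for anatomy

axial = volume[32, :, :]                    # cross section in the z = 32 plane
sagittal = volume[:, :, 32]                 # cross section in the x = 32 plane

# Rotate the whole volume 30 degrees about the z-axis (in-plane axes 1, 2)
rotated = ndimage.rotate(volume, angle=30, axes=(1, 2), reshape=False)

print(axial.shape, sagittal.shape, rotated.shape)
```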
ERIC Educational Resources Information Center
Descy, Don E.
1993-01-01
This introduction to the Internet with examples for Macintosh computer users demonstrates the ease of using e-mail, participating on discussion group listservs, logging in to remote sites using Telnet, and obtaining resources using the File Transfer Protocol (FTP). Included are lists of discussion groups, Telnet sites, and FTP Archive sites. (EA)
Computer and internet use by persons after traumatic spinal cord injury.
Goodman, Naomi; Jette, Alan M; Houlihan, Bethlyn; Williams, Steve
2008-08-01
To determine whether computer and internet use by persons post spinal cord injury (SCI) is sufficiently prevalent and broad-based to consider using this technology as a long-term treatment modality for patients who have sustained SCI. A multicenter cohort study. Twenty-six past and current U.S. regional Model Spinal Cord Injury Systems. Patients with traumatic SCI (N=2926) with follow-up interviews between 2004 and 2006, conducted at 1 or 5 years postinjury. Not applicable. Results revealed that 69.2% of participants with SCI used a computer; 94.2% of computer users accessed the internet. Among computer users, 19.1% used assistive devices for computer access. Of the internet users, 68.6% went online 5 to 7 days a week. The most frequent use for internet was e-mail (90.5%) and shopping sites (65.8%), followed by health sites (61.1%). We found no statistically significant difference in computer use by sex or level of neurologic injury, and no difference in internet use by level of neurologic injury. Computer and internet access differed significantly by age, with use decreasing as age group increased. The highest computer and internet access rates were seen among participants injured before the age of 18. Computer and internet use varied by race: 76% of white compared with 46% of black subjects were computer users (P<.001), and 95.3% of white respondents who used computers used the internet, compared with 87.6% of black respondents (P<.001). Internet use increased with education level (P<.001): eighty-six percent of participants who did not graduate from high school or receive a degree used the internet, while over 97% of those with a college or associate's degree did. While the internet holds considerable potential as a long-term treatment modality after SCI, limited access to the internet by those who are black, those injured after age 18, and those with less education does reduce its usefulness in the short term for these subgroups.
Lim, Huat Chye; Curlin, Marcel E; Mittler, John E
2011-11-01
Computer simulation models can be useful in exploring the efficacy of HIV therapy regimens in preventing the evolution of drug-resistant viruses. Current modeling programs, however, were designed by researchers with expertise in computational biology, limiting their accessibility to those who might lack such a background. We have developed a user-friendly graphical program, HIV Therapy Simulator (HIVSIM), that is accessible to non-technical users. The program allows clinicians and researchers to explore the effectiveness of various therapeutic strategies, such as structured treatment interruptions, booster therapies and induction-maintenance therapies. We anticipate that HIVSIM will be useful for evaluating novel drug-based treatment concepts in clinical research, and as an educational tool. HIV Therapy Simulator is freely available for Mac OS and Windows at http://sites.google.com/site/hivsimulator/. jmittler@uw.edu. Supplementary data are available at Bioinformatics online.
Interactive computation of coverage regions for indoor wireless communication
NASA Astrophysics Data System (ADS)
Abbott, A. Lynn; Bhat, Nitin; Rappaport, Theodore S.
1995-12-01
This paper describes a system which assists in the strategic placement of RF base stations within buildings. Known as the site modeling tool (SMT), this system allows the user to display graphical floor plans and to select base station transceiver parameters, including location and orientation, interactively. The system then computes and highlights estimated coverage regions for each transceiver, enabling the user to assess the total coverage within the building. For single-floor operation, the user can choose between distance-dependent and partition-dependent path-loss models. Similar path-loss models are also available for the case of multiple floors. This paper describes the method used by the system to estimate coverage for both directional and omnidirectional antennas. The site modeling tool is intended to be simple to use by individuals who are not experts at wireless communication system design, and is expected to be very useful in the specification of indoor wireless systems.
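The distance- and partition-dependent estimate can be sketched with the standard log-distance path-loss model; the exponent, reference loss, wall attenuations, and receiver sensitivity below are generic textbook values, not SMT's calibrated parameters:

```python
# Sketch: log-distance path loss with per-partition attenuation, and a
# coverage test against an assumed receiver sensitivity.
import math

def path_loss_db(d_m, n=3.0, pl_d0=31.7, d0=1.0, partition_losses_db=()):
    """PL(d) = PL(d0) + 10 n log10(d/d0) + sum of partition losses (dB)."""
    return pl_d0 + 10.0 * n * math.log10(d_m / d0) + sum(partition_losses_db)

def covered(tx_power_dbm, d_m, walls_db, rx_sensitivity_dbm=-90.0):
    rx = tx_power_dbm - path_loss_db(d_m, partition_losses_db=walls_db)
    return rx >= rx_sensitivity_dbm

# 20 dBm transmitter, receiver 25 m away behind two 6 dB walls
print(covered(20.0, 25.0, walls_db=(6.0, 6.0)))  # True at about -65.6 dBm
```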
Item-Based Top-N Recommendation Algorithms
2003-01-20
basket of items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Finally, even though the...not discriminate between items that are present in frequent itemsets and items that are not, while still maintaining the computational advantages of...[table residue: per-dataset statistics (rows, columns, non-zeros, density, average) for the ccard, ecommerce, em, and ml data sets]
Human Centered Computing for Mars Exploration
NASA Technical Reports Server (NTRS)
Trimble, Jay
2005-01-01
The science objectives are to determine the aqueous, climatic, and geologic history of a site on Mars where conditions may have been favorable to the preservation of evidence of prebiotic or biotic processes. Human Centered Computing is a development process that starts with users and their needs, rather than with technology. The goal is a system design that serves the user, where the technology fits the task and the complexity is that of the task not of the tool.
Coupled Ocean/Atmospheric Mesoscale Prediction System (COAMPS), Version 5.0 (User’s Guide)
2010-03-30
provides tools for common modeling functions, as well as regridding, data decomposition, and communication on parallel computers. NRL/MR/7320--10...specified gncomDir. If running COAMPS at the DSRC (e.g. BABBAGE, DAVINCI, or EINSTEIN), the global NCOM files will be copied to /scr/[user]/COAMPS/data...the site (DSRC or local) and the platform (BABBAGE, DAVINCI, EINSTEIN, or local machine) on which COAMPS is being run. site=navy_dsrc (for DSRC
Broering, N C
1983-01-01
Georgetown University's Library Information System (LIS), an integrated library system designed and implemented at the Dahlgren Memorial Library, is broadly described from an administrative point of view. LIS' functional components consist of eight "user-friendly" modules: catalog, circulation, serials, bibliographic management (including Mini-MEDLINE), acquisitions, accounting, networking, and computer-assisted instruction. This article touches on emerging library services, user education, and computer information services, which are also changing the role of staff librarians. The computer's networking capability brings the library directly to users through personal or institutional computers at remote sites. The proposed Integrated Medical Center Information System at Georgetown University will include interface with LIS through a network mechanism. LIS is being replicated at other libraries, and a microcomputer version is being tested for use in a hospital setting. PMID:6688749
BITNET: Past, Present, and Future.
ERIC Educational Resources Information Center
Oberst, Daniel J.; Smith, Sheldon B.
1986-01-01
Discusses history and development of the academic computer network BITNET, including BITNET Network Support Center's growth and services, and international expansion. Network users, reasons for growth, and future developments are reviewed. A BITNET applications sampler and listings of compatible computers and operating systems, sites, and…
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
SITEQUAL--A User's Guide: Computerized Site Evaluation for 14 Southern Hardwood Species
Constance A. Harrington; Bettina M. Casson
1986-01-01
An interactive computer program, SITEQUAL, has been developed from the widely-used Baker and Broadfoot field guides, which evaluate site quality for 14 southern hardwood tree species. The SITEQUAL program calculates site index for all species simultaneously and provides a breakdown of site index into the component contributions by each of the four major soil factors...
Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio
2017-11-25
Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires computational software that is both highly accurate and efficient, able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) the sequence analysis for integration site processing is fully compliant with paired-end reads and includes a sequence quality filter before and after alignment on the target genome; (2) a heuristic algorithm to reduce false positive integration sites at the nucleotide level, lessening the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as a researcher front-end to perform integration site analyses without computational skills; (5) the speedup of all steps through parallelization (Hadoop free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a > 6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for integration site analysis, which allows gene therapy integration data to be handled in a cost- and time-effective fashion. Moreover, web access to VISPA2 ( http://openserver.itb.cnr.it/vispa/ ) ensures accessibility and ease of use of a complex analytical tool for researchers. We released the source code of VISPA2 in a public repository ( https://bitbucket.org/andreacalabria/vispa2 ).
Evaluation of Item-Based Top-N Recommendation Algorithms
2000-09-15
Furthermore, one of the advantages of the item-based algorithm is that it has much smaller computational require-...[figure residue: axis ticks omitted]...items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Consequently, even though the throughput of...[Table 1 residue, users/items/non-zeros per data set: ecommerce 6667/17491/91222, catalog 50918/39080/435524, ccard 42629/68793/398619, skills 4374/2125/82612, movielens 943/1682/100000]
NASA Technical Reports Server (NTRS)
Forney, J. A.; Walker, D.; Lanier, M.
1979-01-01
The computer program SHCOST was used to perform economic analyses of operational test sites. The program allows consideration of the economic parameters that are important to the solar system user. A life-cycle cost and cash-flow comparison is made between a solar heating system and a conventional system. The program assists in sizing the solar heating system. A sensitivity study and plot capability allow the user to select the most cost-effective system configuration.
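A minimal sketch of the kind of life-cycle cost comparison described here: capital cost plus the present worth of an escalating annual energy cost stream. All dollar figures, rates, and the 20-year lifetime are invented for illustration, not SHCOST's inputs:

```python
# Sketch: life-cycle cost of a solar vs. a conventional heating system.

def present_worth(annual_cost, discount_rate, escalation_rate, years):
    """Present worth of an annual cost that escalates each year."""
    return sum(
        annual_cost * (1 + escalation_rate) ** t / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

def life_cycle_cost(capital, annual_energy_cost, years=20,
                    discount_rate=0.08, fuel_escalation=0.06):
    return capital + present_worth(annual_energy_cost, discount_rate,
                                   fuel_escalation, years)

solar = life_cycle_cost(capital=9000, annual_energy_cost=250)
conventional = life_cycle_cost(capital=2000, annual_energy_cost=900)
print(f"solar: {solar:,.0f}  conventional: {conventional:,.0f}")
```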
User’s Guide for the Coupled Ocean/Atmospheric Mesoscale Prediction System (COAMPS) Version 5.0
2010-03-30
provides tools for common modeling functions, as well as regridding, data decomposition, and communication on parallel computers. NRL/MR/7320...specified gncomDir. If running COAMPS at the DSRC (e.g. BABBAGE, DAVINCI, or EINSTEIN), the global NCOM files will be copied to /scr/[user]/COAMPS/data...the site (DSRC or local) and the platform (BABBAGE, DAVINCI, EINSTEIN, or local machine) on which COAMPS is being run. site=navy_dsrc (for DSRC
Future Approach to tier-0 extension
NASA Astrophysics Data System (ADS)
Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.
2017-10-01
The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.
NASA Astrophysics Data System (ADS)
McCubbine, Jack; Tontini, Fabio Caratori; Stagpoole, Vaughan; Smith, Euan; O'Brien, Grant
2018-01-01
A Python program (Gsolve) with a graphical user interface has been developed to assist with routine data processing of relative gravity measurements. Gsolve calculates the gravity at each measurement site of a relative gravity survey, which is referenced to at least one known gravity value. The tidal effects of the sun and moon, gravimeter drift, and tares in the data are all accounted for during the processing of the survey measurements. The calculation is based on a least squares formulation in which the absolute gravity at each surveyed location and parameters relating to the dynamics of the gravimeter are estimated from the relative gravity observations and the supplied gravity reference site values. The program additionally allows the user to compute free-air gravity anomalies, with respect to the GRS80 and GRS67 reference ellipsoids, from the determined gravity values, and to calculate terrain corrections at each of the surveyed sites using a prism formula and a user-supplied digital elevation model. This paper reviews the mathematical framework used to reduce relative gravimeter survey observations to gravity values. It then goes on to detail how the processing steps can be implemented using the software.
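The free-air anomaly step can be written down directly: normal gravity on the GRS80 ellipsoid from Somigliana's closed form, plus the standard free-air gradient. This is a sketch of the textbook formulas, not Gsolve's source code, and the observation values below are invented:

```python
# Sketch: free-air gravity anomaly with GRS80 normal gravity.
import math

def normal_gravity_grs80(lat_deg):
    """GRS80 normal gravity on the ellipsoid (Somigliana's formula), mGal."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    gamma = 9.7803267715 * (1 + 0.001931851353 * s2) / math.sqrt(
        1 - 0.00669438002290 * s2)           # m/s^2
    return gamma * 1e5                        # 1 m/s^2 = 1e5 mGal

def free_air_anomaly(g_obs_mgal, lat_deg, height_m):
    """Observed gravity minus normal gravity, reduced to the ellipsoid
    with the standard 0.3086 mGal/m free-air gradient."""
    return g_obs_mgal - normal_gravity_grs80(lat_deg) + 0.3086 * height_m

print(free_air_anomaly(g_obs_mgal=980102.5, lat_deg=-41.3, height_m=125.0))
```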
Building a Prototype of LHC Analysis Oriented Computing Centers
NASA Astrophysics Data System (ADS)
Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.
2012-12-01
A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well-established concept with years of running experience, sites specialized in chaotic end-user analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with storage techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the Consortium.
Central Data Processing System (CDPS) user's manual: Solar heating and cooling program
NASA Technical Reports Server (NTRS)
1976-01-01
The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites is presented. The instrumentation data associated with these systems is collected, processed, and presented in a form which supported continuity of performance evaluation across all applications. The CDPS consisted of three major elements: communication interface computer, central data processing computer, and performance evaluation data base. Users of the performance data base were identified, and procedures for operation, and guidelines for software maintenance were outlined. The manual also defined the output capabilities of the CDPS in support of external users of the system.
National Stormwater Calculator User's Guide - Version 1.2
The National Stormwater Calculator is a simple-to-use tool for computing small site hydrology for any location within the US. It estimates the amount of stormwater runoff generated from a site under different development and control scenarios over a long-term period of historica...
gLExec: gluing grid computing to the Unix world
NASA Astrophysics Data System (ADS)
Groep, D.; Koeroo, O.; Venekamp, G.
2008-07-01
The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
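The core mapping idea can be sketched as follows, with an invented mapping table and a simplified pool-account scheme; gLExec itself delegates this decision to LCAS/LCMAPS or GUMS and is considerably more careful (for instance, real mappings are sticky per credential):

```python
# Sketch: translate a grid identity (certificate DN plus VO attribute)
# into a local Unix user/group. Table and account scheme are illustrative.
GROUP_MAP = {                      # VOMS FQAN prefix -> local Unix group
    "/atlas/Role=production": "atlasprd",
    "/atlas": "atlas",             # checked after the more specific entry
}
_pool_counter = {}

def map_to_unix(dn: str, fqan: str) -> tuple[str, str]:
    """Return (user, group); users are leased pool accounts per group."""
    for prefix, group in GROUP_MAP.items():
        if fqan.startswith(prefix):
            n = _pool_counter.get(group, 0) % 50 + 1   # 50 pool accounts
            _pool_counter[group] = _pool_counter.get(group, 0) + 1
            return f"{group}{n:03d}", group
    raise PermissionError(f"no mapping for {dn} with {fqan}")

print(map_to_unix("/DC=ch/O=CERN/CN=Alice", "/atlas/Role=production"))
print(map_to_unix("/DC=ch/O=CERN/CN=Bob", "/atlas"))
```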
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
ERIC Educational Resources Information Center
O'Hanlon, Charlene
2009-01-01
Since laptop programs extend instruction beyond the campus, schools must try a variety of solutions to protect their machines--and their users. Combining high-tech safeguards with face-to-face user education is essential for schools whose laptop programs allow students to take the computers off-site. For a cautionary tale on the…
unless the user has internet access on the same machine. The products, including metadata, that are on is an access portal to GEONETCast products. It is a searchable database that can be found at -channel. This version will run on a local computer at a user site but internet links will not function
The document describes RAETRAD-F (RAdon Emanation and TRAnsport into Dwellings--Florida), a computer code that provides a simple way to analyze site-specific soil measurements to estimate upper-limit indoor radon concentrations in a reference house on a site. The code uses data f...
Online Tools for Astronomy and Cosmochemistry
NASA Technical Reports Server (NTRS)
Meyer, B. S.
2005-01-01
Over the past year, the Webnucleo Group at Clemson University has been developing a web site with a number of interactive online tools for astronomy and cosmochemistry applications. The site uses SHP (Simplified Hypertext Preprocessor), which, because of its flexibility, allows us to embed almost any computer language into our web pages. For a description of SHP, please see http://www.joeldenny.com/ At our web site, an internet user may mine large and complex data sets, such as our stellar evolution models, and make graphs or tables of the results. The user may also run some of our detailed nuclear physics and astrophysics codes, such as our nuclear statistical equilibrium code, which is written in Fortran and C. Again, the user may make graphs and tables and download the results.
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400
Social Media: Strategic Asset or Operational Vulnerability?
2012-05-04
Marine Corps message indicated that social networking sites “are particularly high risk due to information exposure, user generated content, and...Immediate Ban of Internet Social Networking Sites on Marine Corps Enterprise Network NIPRNET,” U.S. Marine Corps, accessed April 3, 2012, http... networking sites via the DoD’s unclassified computer network. The memorandum provided guidance on official use of social networking sites as well
Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.
2017-10-24
The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
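At its simplest, a constituent load is concentration times streamflow with a unit conversion, summed over time; the sketch below shows that arithmetic with invented values (WRTDS layers weighted regression on time, discharge, and season on top of this basic relation):

```python
# Sketch: daily constituent load from concentration and streamflow.
def daily_load_kg(conc_mg_per_l: float, flow_cfs: float) -> float:
    """Load (kg/day) = C (mg/L) x Q (ft3/s) x 2.4466,
    where 2.4466 = 28.3168 L/ft3 * 86400 s/day * 1e-6 kg/mg."""
    return conc_mg_per_l * flow_cfs * 2.4466

# Period load as the sum of daily loads (two illustrative days)
days = [(2.5, 1200.0), (3.1, 950.0)]   # (nitrate mg/L, flow cfs)
print(sum(daily_load_kg(c, q) for c, q in days), "kg over 2 days")
```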
Enhanced, Partially Redundant Emergency Notification System
NASA Technical Reports Server (NTRS)
Pounds, Clark D.
2005-01-01
The Johnson Space Center Emergency Notification System (JENS) software utilizes pre-existing computation and communication infrastructure to augment a prior variable-tone, siren-based, outdoor alarm system, in order to enhance the ability to give notice of emergencies to employees working in multiple buildings. The JENS software includes a component that implements an administrative Web site. Administrators can grant and deny access to the administrative site and to an originator Web site that enables authorized individuals to quickly compose and issue alarms. The originator site also facilitates maintenance and review of alarms already issued. A custom client/server application program enables an originator to notify every user who is logged in on a Microsoft Windows-based desktop computer by means of a pop-up message that interrupts, but does not disrupt, the user's work. Alternatively or in addition, the originator can send an alarm message to recipients on an e-mail distribution list and/or can post the notice on an internal Web site. An alarm message can consist of (1) text describing the emergency and suggesting a course of action and (2) a replica of the corresponding audible outdoor alarm.
ERIC Educational Resources Information Center
Subramony, Deepak Prem
Gutman's means-end theory, widely used in market research, identifies three levels of abstraction (attributes, consequences, and values) associated with the use of products, representing the process by which the physical attributes of products gain personal meaning for users. The primary methodological manifestation of means-end theory is the…
Computer-based route-definition system for peripheral bronchoscopy.
Graham, Michael W; Gibbs, Jason D; Higgins, William E
2012-04-01
Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.
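A toy sketch of the route-definition idea follows: find the branch sequence from the trachea to a target airway by searching the airway tree. The paper's methods operate on airway trees segmented from MDCT with automatic and semiautomatic corrections; the hand-built adjacency list and branch names below are illustrative assumptions.

```python
# Sketch: breadth-first search for a complete route through a toy
# airway tree from the trachea to a peripheral target branch.
from collections import deque

airway_tree = {            # parent branch -> child branches (illustrative)
    "trachea": ["left_main", "right_main"],
    "right_main": ["RUL", "bronchus_intermedius"],
    "bronchus_intermedius": ["RML", "RLL"],
    "left_main": ["LUL", "LLL"],
    "RUL": [], "RML": [], "RLL": [], "LUL": [], "LLL": [],
}

def route_to(target: str, root: str = "trachea"):
    """Return the branch sequence from `root` to `target`, or None."""
    queue = deque([[root]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for child in airway_tree.get(path[-1], []):
            queue.append(path + [child])
    return None

print(route_to("RLL"))  # ['trachea', 'right_main', 'bronchus_intermedius', 'RLL']
```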
Embedded Web Technology: Internet Technology Applied to Real-Time System Control
NASA Technical Reports Server (NTRS)
Daniele, Carl J.
1998-01-01
The NASA Lewis Research Center is developing software tools to bridge the gap between the traditionally non-real-time Internet technology and the real-time, embedded-controls environment for space applications. Internet technology has been expanding at a phenomenal rate. The simple World Wide Web browsers (such as earlier versions of Netscape, Mosaic, and Internet Explorer) that resided on personal computers just a few years ago only enabled users to log into and view a remote computer site. With current browsers, users not only view but also interact with remote sites. In addition, the technology now supports numerous computer platforms (PC's, MAC's, and Unix platforms), thereby providing platform independence. In contrast, the development of software to interact with a microprocessor (embedded controller) that is used to monitor and control a space experiment has generally been a unique development effort. For each experiment, a specific graphical user interface (GUI) has been developed. This procedure works well for a single-user environment. However, the interface for the International Space Station (ISS) Fluids and Combustion Facility will have to enable scientists throughout the world and astronauts onboard the ISS, using different computer platforms, to interact with their experiments in the Fluids and Combustion Facility. Developing a specific GUI for all these users would be cost prohibitive. An innovative solution to this requirement, developed at Lewis, is to use Internet technology, where the general problem of platform independence has already been partially solved, and to leverage this expanding technology as new products are developed. This approach led to the development of the Embedded Web Technology (EWT) program at Lewis, which has the potential to significantly reduce software development costs for both flight and ground software.
Eurogrid: a new glideinWMS based portal for CDF data analysis
NASA Astrophysics Data System (ADS)
Amerio, S.; Benjamin, D.; Dost, J.; Compostella, G.; Lucchesi, D.; Sfiligoi, I.
2012-12-01
The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, was developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the TIER-1 CNAF computing center in Italy, and additional LCG computing resources at different TIER-2 sites in Italy, Spain, Germany, and France, are accessed through a common interface. The goal of this project is to develop a portal that is easy to integrate into the existing CDF computing model, completely transparent to the user, and requires a minimal amount of maintenance support from the CDF collaboration. In this paper we review the implementation of this new portal and its performance in the first months of usage. Eurogrid is based on the glideinWMS software, a glidein-based Workload Management System (WMS) that works on top of Condor. As the CDF CAF is based on Condor, the choice of the glideinWMS software was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched in a very efficient way, completely transparent to the users. In official use since June 2011, Eurogrid effectively complements and supports CDF computing resources, offering an optimal solution for the future in terms of the manpower required for administration, support, and development.
Rapid Prototyping of Hydrologic Model Interfaces with IPython
NASA Astrophysics Data System (ADS)
Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.
2014-12-01
A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site- and/or process-specific. As a result, we end up with many customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site- and application-specific user interfaces. We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.
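A minimal example of the kind of tailored notebook interface described, assuming the ipywidgets package: sliders expose only the two or three parameters a particular site needs. The model function here is a stand-in, not one of the authors' codes.

```python
# Sketch of a site-specific notebook interface built with ipywidgets.
from ipywidgets import interact, FloatSlider

def run_model(roughness=0.03, slope=0.001):
    # Placeholder for a call into a real hydrologic simulator.
    depth = (roughness * 10.0 / slope) ** 0.375   # toy stage estimate
    print(f"simulated depth: {depth:.2f} m")

# In a notebook, this renders two sliders wired to run_model.
interact(run_model,
         roughness=FloatSlider(min=0.01, max=0.10, step=0.005, value=0.03),
         slope=FloatSlider(min=0.0001, max=0.01, step=0.0001, value=0.001))
```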
Schopf, Jennifer M.; Nitzberg, Bill
2002-01-01
The design and implementation of a national computing system and data grid has become a reachable goal from both the computer science and computational science point of view. A distributed infrastructure capable of sophisticated computational functions can bring many benefits to scientific work, but poses many challenges, both technical and socio-political. Technical challenges include having basic software tools, higher-level services, functioning and pervasive security, and standards, while socio-political issues include building a user community, adding incentives for sites to be part of a user-centric environment, and educating funding sources about the needs of this community. This paper details the areas relating to Grid research that we feel still need to be addressed to fully leverage the advantages of the Grid.
Big Data Smart Socket (BDSS): a system that abstracts data transfer habits from end users.
Watts, Nicholas A; Feltus, Frank A
2017-02-15
The ability to centralize and store data for long periods on an end user's computational resources is increasingly difficult for many scientific disciplines. For example, genomics data is increasingly large and distributed, and the data needs to be moved into workflow execution sites ranging from lab workstations to the cloud. However, the typical user is not always informed on emerging network technology or the most efficient methods to move and share data. Thus, the user defaults to using inefficient methods for transfer across the commercial internet. To accelerate large data transfer, we created a tool called the Big Data Smart Socket (BDSS) that abstracts data transfer methodology from the user. The user provides BDSS with a manifest of datasets stored in a remote storage repository. BDSS then queries a metadata repository for curated data transfer mechanisms and the optimal path to move each of the files in the manifest to the site of workflow execution. BDSS functions as a standalone tool or can be directly integrated into a computational workflow such as provided by the Galaxy Project. To demonstrate applicability, we use BDSS within a biological context, although it is applicable to any scientific domain. BDSS is available under version 2 of the GNU General Public License at https://github.com/feltus/BDSS. Contact: ffeltus@clemson.edu.
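A conceptual sketch of the matching step described above follows. The registry format, hostnames, and tool names are invented for illustration; BDSS's actual metadata repository schema is not reproduced here.

```python
# Sketch: for each manifest entry, look up a curated transfer mechanism
# in a metadata registry and fall back to plain HTTPS.
TRANSFER_REGISTRY = {
    "ftp.sra.example.org": {"tool": "aspera", "options": ["-T", "-l", "300m"]},
    "data.genomes.example.edu": {"tool": "gridftp", "options": ["-p", "8"]},
}

def plan_transfers(manifest):
    """Return (url, mechanism) pairs for every file in the manifest."""
    plans = []
    for url in manifest:
        host = url.split("/")[2]          # scheme://HOST/path
        mechanism = TRANSFER_REGISTRY.get(host, {"tool": "https", "options": []})
        plans.append((url, mechanism))
    return plans

manifest = ["https://ftp.sra.example.org/reads/SRR0000001.fastq.gz"]
for url, mech in plan_transfers(manifest):
    print(url, "->", mech["tool"])
```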
Hand held data collection and monitoring system for nuclear facilities
Brayton, D.D.; Scharold, P.G.; Thornton, M.W.; Marquez, D.L.
1999-01-26
An apparatus and method are disclosed for a data collection and monitoring system that utilizes a pen-based hand-held computer unit containing interaction software that allows the user to review maintenance procedures, collect data, compare data with historical trends and safety limits, and input new information at various collection sites. The system allows automatic transfer of the collected data to a main computer database for further review, reporting, and distribution, and for uploading updated collection and maintenance procedures. The hand-held computer has a running to-do list, so sample collection and other general tasks, such as housekeeping, are automatically scheduled for timely completion. A done list helps users keep track of all completed tasks. The built-in checklist assures that work processes will meet the applicable processes and procedures. Users can hand-write comments or drawings with an electronic pen that allows them to enter information directly on the screen. 15 figs.
Rational analyses of information foraging on the web.
Pirolli, Peter
2005-05-06
This article describes rational analyses and cognitive models of Web users developed within information foraging theory. This is done by following the rational analysis methodology of (a) characterizing the problems posed by the environment, (b) developing rational analyses of behavioral solutions to those problems, and (c) developing cognitive models that approach the realization of those solutions. Navigation choice is modeled as a random utility model that uses spreading activation mechanisms that link proximal cues (information scent) that occur in Web browsers to internal user goals. Web-site leaving is modeled as an ongoing assessment by the Web user of the expected benefits of continuing at a Web site as opposed to going elsewhere. These cost-benefit assessments are also based on spreading activation models of information scent. Evaluations include a computational model of Web user behavior called Scent-Based Navigation and Information Foraging in the ACT Architecture, and the Law of Surfing, which characterizes the empirical distribution of the length of paths of visitors at a Web site.
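A minimal random-utility sketch of the navigation-choice model described above: the probability of following a link is a softmax over its information scent. The scent values below are illustrative, not fitted activations from the ACT architecture.

```python
# Sketch: link-choice probabilities as a softmax over information scent.
import math

def choice_probabilities(scents, temperature=1.0):
    """Softmax over link scent; lower temperature means greedier choices."""
    weights = [math.exp(s / temperature) for s in scents]
    total = sum(weights)
    return [w / total for w in weights]

# Three links on a page, the second carrying the strongest scent of the goal.
print(choice_probabilities([0.2, 1.5, 0.6]))
```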
Computer Workstation: Pointer/Mouse
StreamStats in North Carolina: a water-resources Web application
Weaver, J. Curtis; Terziotti, Silvia; Kolb, Katharine R.; Wagner, Chad R.
2012-01-01
A statewide StreamStats application for North Carolina was developed in cooperation with the North Carolina Department of Transportation following completion of a pilot application for the upper French Broad River basin in western North Carolina (Wagner and others, 2009). StreamStats for North Carolina, available at http://water.usgs.gov/osw/streamstats/north_carolina.html, is a Web-based Geographic Information System (GIS) application developed by the U.S. Geological Survey (USGS) in consultation with Environmental Systems Research Institute, Inc. (Esri) to provide access to an assortment of analytical tools that are useful for water-resources planning and management (Ries and others, 2008). The StreamStats application provides an accurate and consistent process that allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection sites and user-selected ungaged sites. In the North Carolina application, users can compute 47 basin characteristics and peak-flow frequency statistics (Weaver and others, 2009; Robbins and Pope, 1996) for a delineated drainage basin. Selected streamflow statistics and basin characteristics for data-collection sites have been compiled from published reports and also are immediately accessible by querying individual sites from the web interface. Examples of basin characteristics that can be computed in StreamStats include drainage area, stream slope, mean annual precipitation, and percentage of forested area (Ries and others, 2008). Examples of streamflow statistics that were previously available only through published documents include peak-flow frequency, flow-duration, and precipitation data. These data are valuable for making decisions related to bridge design, floodplain delineation, water-supply permitting, and sustainable stream quality and ecology. The StreamStats application also allows users to identify stream reaches upstream and downstream from user-selected sites and obtain information for locations along streams where activities occur that may affect streamflow conditions. This functionality can be accessed through a map-based interface with the user’s Web browser, or individual functions can be requested remotely through Web services (Ries and others, 2008).
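The peak-flow statistics mentioned above come from regional regression equations, typically power laws on basin characteristics such as drainage area. A toy illustration with invented coefficients follows; the real North Carolina equations are in the cited Weaver and others (2009) and Robbins and Pope (1996) reports.

```python
# Illustrative form of a regional peak-flow regression (coefficients
# are made up for the example, not from the cited USGS reports).
def peak_flow_cfs(drainage_area_sq_mi, a=180.0, b=0.74):
    """Estimated T-year peak flow (ft^3/s) from drainage area (mi^2)."""
    return a * drainage_area_sq_mi ** b

print(peak_flow_cfs(25.0))   # roughly 1,950 cfs with these toy coefficients
```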
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
Computer Workstations: Good Working Positions
Computer Workstations: Wrist/Palm Supports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.
The Energy Research program may be on the verge of abdicating an important role it has traditionally played in the development and use of state-of-the-art computer systems. The lack of easy access to Class VI systems, coupled with the easy availability of local, user-friendly systems, is conspiring to drive many investigators away from forefront research in computational science and from the use of state-of-the-art computers for more discipline-oriented problem solving. The survey conducted under the auspices of this contract clearly demonstrates a significant suppressed demand for actual Class VI hours totaling the full capacity of one such system. Current usage is about a factor of 15 below this level. There is also a need for about 50% more capacity in the current mini/midi availability. Meeting the needs of the ER community for this level of computing power and capacity is most probably best achieved through the establishment of a central Class VI capability at some site, linked through a nationwide network to the various ER laboratories and universities and interfaced with the local user-friendly systems at those remote sites.
NASA Astrophysics Data System (ADS)
Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.
2007-12-01
The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.
Airport-Noise Levels and Annoyance Model (ALAMO) user's guide
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
A guide for the use of the Airport-Noise Levels and Annoyance MOdel (ALAMO) at the Langley Research Center computer complex is provided. This document is divided into five primary sections: the introduction, the purpose of the model, and in-depth descriptions of the following subsystems: baseline, noise reduction simulation, and track analysis. For each subsystem, the user is provided with a description of architecture, an explanation of subsystem use, sample results, and a case runner's checklist. It is assumed that the user is familiar with operations at the Langley Research Center (LaRC) computer complex, the Network Operating System (NOS 1.4), and CYBER Control Language. Incorporated within the ALAMO model is a census database system called SITE II.
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Sisterson
2010-01-12
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2010 for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208); for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208); and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) deployment in Graciosa Island, the Azores, Portugal, continues; its OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are the result of downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP locale has historically had a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. Beginning this quarter, the SGP began a transition to a smaller footprint (150 km x 150 km) by rearranging the original and new instrumentation made available through the American Recovery and Reinvestment Act (ARRA). The central facility and 4 extended facilities will remain, but there will be up to 16 new surface characterization facilities, 4 radar facilities, and 3 profiler facilities sited in the smaller domain. This new configuration will provide observations at scales more appropriate to current and future climate models. The TWP locale has the Manus, Nauru, and Darwin sites. These sites will also have expanded measurement capabilities with the addition of new instrumentation made available through ARRA funds. It is anticipated that the new instrumentation at all the fixed sites will be in place within the next 12 months. The AMF continues its 20-month deployment in Graciosa Island, Azores, Portugal, that started May 1, 2009.
The AMF will also have additional observational capabilities within the next 12 months. Users can participate in field experiments at the sites and mobile facility, or they can participate remotely. Therefore, a variety of mechanisms are provided to users to access site information. Users who have immediate (real-time) needs for data access can request a research account on the local site data systems. This access is particularly useful to users for quick decisions in executing time-dependent activities associated with field campaigns at the fixed sites and mobile facility locations. The eight computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; the AMF; and the DMF at PNNL. However, users are warned that the data provided at the time of collection have not been fully screened for quality and therefore are not considered to be official ACRF data. Hence, these accounts are considered to be part of the facility activities associated with field campaign activities, and users are tracked. In addition, users who visit sites can connect their computer or instrument to an ACRF site data system network, which requires an on-site device account. Remote (off-site) users can also have remote access to any ACRF instrument or computer system at any ACRF site, which requires an off-site device account. These accounts are also managed and tracked.
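The OPSMAX and VARIANCE definitions quoted above reduce to a two-line computation. The sketch below reproduces the NSA OPSMAX figure from the text; the ACTUAL value is hypothetical, since the report does not quote one here.

```python
# The report's time-based operating metrics as direct computations.
HOURS_IN_QUARTER = 2208          # 92 days * 24 hours

def opsmax(goal_fraction, hours=HOURS_IN_QUARTER):
    """Planned-uptime ceiling: the uptime goal fraction of total hours."""
    return goal_fraction * hours

def variance(actual, opsmax_hours):
    """Unplanned-downtime fraction: 1 - (ACTUAL / OPSMAX)."""
    return 1.0 - actual / opsmax_hours

nsa_opsmax = opsmax(0.90)                 # 1,987.20 hours, as in the report
print(nsa_opsmax, variance(1850.0, nsa_opsmax))   # 1850.0 is a hypothetical ACTUAL
```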
Price comparisons on the internet based on computational intelligence.
Kim, Jun Woo; Ha, Sung Ho
2014-01-01
Information-intensive Web services such as price comparison sites have recently been gaining popularity. However, most users including novice shoppers have difficulty in browsing such sites because of the massive amount of information gathered and the uncertainty surrounding Web environments. Even conventional price comparison sites face various problems, which suggests the necessity of a new approach to address these problems. Therefore, for this study, an intelligent product search system was developed that enables price comparisons for online shoppers in a more effective manner. In particular, the developed system adopts linguistic price ratings based on fuzzy logic to accommodate user-defined price ranges, and personalizes product recommendations based on linguistic product clusters, which help online shoppers find desired items in a convenient manner.
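The abstract names fuzzy linguistic price ratings without giving formulas; a minimal sketch of that idea follows, assuming simple triangular membership functions. The breakpoints are made up for the example and are not the paper's fitted parameters.

```python
# Sketch: linguistic price ratings from triangular fuzzy memberships
# over a user-defined price range [lo, hi].
def triangular(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def rate_price(price, lo=100.0, hi=500.0):
    mid = (lo + hi) / 2
    return {
        "cheap":     triangular(price, lo - (mid - lo), lo, mid),
        "moderate":  triangular(price, lo, mid, hi),
        "expensive": triangular(price, mid, hi, hi + (hi - mid)),
    }

print(rate_price(220.0))   # partly 'cheap', mostly 'moderate', not 'expensive'
```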
A Job Monitoring and Accounting Tool for the LSF Batch System
NASA Astrophysics Data System (ADS)
Sarkar, Subir; Taneja, Sonia
2011-12-01
This paper presents a web-based job monitoring and group-and-user accounting tool for the LSF Batch System. The user-oriented job monitoring displays a simple and compact quasi-real-time overview of the batch farm for both local and Grid jobs. For Grid jobs, the Distinguished Name (DN) of the Grid user is shown. The overview monitor provides the most up-to-date status of a batch farm at any time. The accounting tool works with the LSF accounting log files. The accounting information is shown for a few pre-defined time periods by default; however, one can also compute the same information for any arbitrary time window. The tool has already proved to be an extremely useful means to validate more extensive accounting tools available in the Grid world. Several sites are already using the present tool, and more sites running the LSF batch system have shown interest. We discuss the various aspects that make the tool essential for site administrators and end-users alike and outline the current status of development as well as future plans.
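As a sketch of the arbitrary-time-window accounting described above, the snippet below aggregates CPU hours per group, assuming records have already been parsed out of the LSF accounting logs into dictionaries; the real lsb.acct line format needs a proper parser and is not reproduced here.

```python
# Sketch: group accounting over an arbitrary time window from
# pre-parsed LSF accounting records.
from datetime import datetime

records = [
    {"user": "alice", "group": "cms",   "end": datetime(2011, 3, 2), "cpu_h": 12.5},
    {"user": "bob",   "group": "atlas", "end": datetime(2011, 3, 9), "cpu_h": 4.0},
]

def cpu_hours_by_group(records, start, end):
    """Sum CPU hours per group for jobs finishing inside [start, end)."""
    totals = {}
    for r in records:
        if start <= r["end"] < end:
            totals[r["group"]] = totals.get(r["group"], 0.0) + r["cpu_h"]
    return totals

print(cpu_hours_by_group(records, datetime(2011, 3, 1), datetime(2011, 4, 1)))
```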
PAD_AUDIT -- PAD Auditing Package
NASA Astrophysics Data System (ADS)
Clayton, C. A.
The PAD (Packet Assembler Disassembler) utility is the part of the VAX/VMS Coloured Book Software (CBS) which allows a user to log onto remote computers from a local VAX. Unfortunately, logging into a computer via either the Packet SwitchStream (PSS) or the International Packet SwitchStream (IPSS) costs real money. Some users either do not appreciate this or do not care and have been known to clock up rather large quarterly bills. This software package allows a system manager to determine who has used PAD to call where and (most importantly) how much it has cost. The system manager can then take appropriate action - either charging the individuals, warning them to use the facility with more care or even denying access to a greedy user to one or more sites.
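The package's core bookkeeping, who called where and at what cost, can be sketched as below. The tariff values and log layout are invented for illustration; the original works from the VMS CBS accounting records, whose format is not reproduced here.

```python
# Hypothetical sketch of PAD-style call billing per user.
TARIFF_PER_SEGMENT = {"PSS": 0.008, "IPSS": 0.05}   # pounds; made-up rates

call_log = [
    {"user": "JBLOGGS", "network": "IPSS", "remote": "NASA", "segments": 1200},
    {"user": "JBLOGGS", "network": "PSS",  "remote": "RAL",  "segments": 300},
]

def bill_by_user(log):
    """Total the cost of each user's outgoing PSS/IPSS calls."""
    bills = {}
    for call in log:
        cost = call["segments"] * TARIFF_PER_SEGMENT[call["network"]]
        bills[call["user"]] = bills.get(call["user"], 0.0) + cost
    return bills

print(bill_by_user(call_log))   # {'JBLOGGS': 62.4}
```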
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, preventing the need of mirroring data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.
Automated CFD Database Generation for a 2nd Generation Glide-Back-Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Michael J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejmil, Edward
2003-01-01
A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment using 13 computers located at 4 different geographical sites. Process automation and web-based access to the database greatly reduces the user workload, removing much of the tedium and tendency for user input errors. The database consists of forces, moments, and solution files obtained by varying the Mach number, angle of attack, and sideslip angle. The forces and moments compare well with experimental data. Stability derivatives are also computed using a monotone cubic spline procedure. Flow visualization and three-dimensional surface plots are used to interpret and characterize the nature of computed flow fields.
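A hedged illustration of the stability-derivative step follows: a monotone cubic (PCHIP) fit differentiated at a flight condition. The pitching-moment values below are toy numbers standing in for AeroDB database entries, not computed results from the paper.

```python
# Sketch: stability derivative from a monotone cubic spline fit.
import numpy as np
from scipy.interpolate import PchipInterpolator

alpha = np.array([-4.0, 0.0, 4.0, 8.0, 12.0])        # angle of attack, deg
cm    = np.array([0.06, 0.02, -0.03, -0.09, -0.16])  # pitching-moment coefficient

cm_spline = PchipInterpolator(alpha, cm)   # shape-preserving cubic fit
cm_alpha = cm_spline.derivative()          # dCm/dalpha as a new spline

print(cm_alpha(2.0))   # pitch stiffness near alpha = 2 deg (negative implies stable)
```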
The Recovery of a Clinical Database Management System after Destruction by Fire *
Covvey, H.D.; McAlister, N.H.; Greene, J.; Wigle, E.D.
1981-01-01
In August 1980 a fire in the Cardiovascular Unit at Toronto General Hospital severely damaged the physical plant and rendered all on-site equipment unrecoverable. Among the hardware items lost in the fire was the computer which supports our cardiovascular database system. Within hours after the fire it was determined that the computer was no longer serviceable. Beyond off-site back-up tapes, there was the possibility that recent records on the computer had suffered a similar fate. Immediate procedures were instituted to obtain a replacement computer system and to clean media to permit data recovery. Within two months a partial system was supporting all users, and all data had been recovered and was in use. The destructive potential of a fire is rarely seriously considered relative to computer equipment in our clinical environments. Full-replacement-value insurance; an excellent equipment supplier with the capacity to respond to an emergency; backup and recovery procedures with off-site storage; and dedicated staff are key hedges against disaster.
Using USNO's API to Obtain Data
NASA Astrophysics Data System (ADS)
Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun
2015-01-01
The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and allow greater usage. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites as well as more easily queried from apps running on tablets and smart phones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services as well as examples of their use can be found on one of our explanatory web pages and will be discussed here.
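A sketch of consuming such a JSON API follows. The endpoint URL and response field names below are placeholders, not the USNO's actual URL templates, which are documented on the explanatory pages mentioned above.

```python
# Sketch: request JSON from a phases-of-the-Moon style API and print it.
import json
from urllib.request import urlopen

# Hypothetical URL shape; substitute the real template from the API docs.
URL = "https://api.example.navy.mil/moon/phase?date=1/1/2015&nump=4"

with urlopen(URL) as response:
    data = json.load(response)

# Field names here are assumptions about the response layout.
for phase in data.get("phasedata", []):
    print(phase.get("phase"), phase.get("date"))
```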
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2007-03-14
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period October 1 through December 31, 2006, for the fixed and mobile sites. Although the AMF is currently up and running in Niamey, Niger, Africa, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The first quarter comprises a total of 2,208 hours. For all fixed sites, the actual data availability (and therefore actual hours of operation) exceeded the individual (as well as the aggregate average of the fixed sites) operational goal for the first quarter of fiscal year (FY) 2007. The Site Access Request System is a web-based database used to track visitors to the fixed sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a Central Facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. NIM represents the AMF statistics for the current deployment in Niamey, Niger, Africa. PYE represents the AMF statistics for the past deployment at Point Reyes, California, in 2005. In addition, users who do not want to wait for data to be provided through the ACRF Archive can request an account on the local site data system. The eight research computers are located at the Barrow and Atqasuk sites; the SGP Central Facility; the TWP Manus, Nauru, and Darwin sites; the DMF at PNNL; and the AMF in Niger. This report provides the cumulative numbers of visitors and user accounts by site for the period January 1, 2006 - December 31, 2006. The U.S. Department of Energy requires national user facilities to report facility use by total visitor days, broken down by institution type, gender, race, citizenship, visitor role, visit purpose, and facility, for actual visitors and for active user research computer accounts. During this reporting period, the ACRF Archive did not collect data on user characteristics in this way. Work is under way to collect and report these data. Table 2 shows the summary of cumulative users for the period January 1, 2006 - December 31, 2006. For the first quarter of FY 2007, the overall number of users is up from the last reporting period. The historical data show that there is an apparent relationship between the total number of users and the 'size' of field campaigns, called Intensive Operation Periods (IOPs): larger IOPs draw more of the site facility resources, which are reflected by the number of site visits and site visit days, research accounts, and device accounts. These types of users typically collect and analyze data in near-real time for a site-specific IOP that is in progress.
However, the Archive accounts represent persistent (year-to-year) ACRF data users that often mine the entire collection of ACRF data, which mostly includes routine data from the fixed and mobile sites, as well as cumulative IOP data sets. Archive data users continue to show steady growth, which is independent of the size of IOPs. For this quarter, the number of Archive data user accounts was 961, the highest since record-keeping began. For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Although the AMF is not officially collecting data this quarter, personnel are regularly involved with teardown, packing, shipping, unpacking, setup, and maintenance activities, so they are included in the safety statistics. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period October 1 - December 31, 2006. There were no recordable or lost workdays or incidents for the first quarter of FY 2007.
NASA Astrophysics Data System (ADS)
Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.
2015-12-01
Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) are introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices, or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
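The gain-combination step mentioned above is, at its simplest, a product over the hardware stages of a channel. A minimal sketch with illustrative sensitivities (not values from SIS or the NRL):

```python
# Sketch: overall channel sensitivity as the product of stage gains.
stages = [
    ("sensor",     1500.0),    # V per (m/s), illustrative
    ("datalogger", 419430.4),  # counts per V, illustrative
]

def overall_gain(stages):
    """Multiply per-stage gains into one end-to-end channel sensitivity."""
    gain = 1.0
    for _name, stage_gain in stages:
        gain *= stage_gain
    return gain

print(overall_gain(stages))   # counts per (m/s) for this toy channel
```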
Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows
NASA Astrophysics Data System (ADS)
Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.
2014-12-01
The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which includes toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is in the automated job launching and monitoring capabilities, which allow a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to the users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele; Levshina, Tanya; Rynge, Mats; Sehgal, Chander; Slyz, Marko
2012-12-01
The Open Science Grid (OSG) supports a diverse community of new and existing users in adopting and making effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides consulting and technical services through the User Support area. We describe these sometimes successful and sometimes not so successful experiences and analyze lessons learned that are helping us improve our services. The services offered include forums to enable shared learning and mutual support, tutorials and documentation for new technology, and troubleshooting of problematic or systemic failure modes. For new communities and users, we bootstrap their use of the distributed high throughput computing technologies and resources available on the OSG by following a phased approach. We first adapt the application and run a small production campaign on a subset of “friendly” sites. Only then do we move the user to run full production campaigns across the many remote sites on the OSG, adding to the community resources up to hundreds of thousands of CPU hours per day. This scaling up generates new challenges - like no determinism in the time to job completion, and diverse errors due to the heterogeneity of the configurations and environments - so some attention is needed to get good results. We cover recent experiences with image simulation for the Large Synoptic Survey Telescope (LSST), small-file large volume data movement for the Dark Energy Survey (DES), civil engineering simulation with the Network for Earthquake Engineering Simulation (NEES), and accelerator modeling with the Electron Ion Collider group at BNL. We will categorize and analyze the use cases and describe how our processes are evolving based on lessons learned.
Galaxy Zoo: An Experiment in Public Science Participation
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Lintott, C. J.; Schawinski, K.; Thomas, D.; Nichol, R. C.; Andreescu, D.; Bamford, S.; Land, K. R.; Murray, P.; Slosar, A.; Szalay, A. S.; Vandenberg, J.; Galaxy Zoo Team
2007-12-01
An interesting question in modern astrophysics research is the relationship between a galaxy's morphology (appearance) and its formation and evolutionary history. Research into this question is complicated by the fact that to get a study sample, researchers must first assign a shape to a large number of galaxies. Classifying a galaxy by shape is nearly impossible for a computer, but easy for a human - however, looking at one million galaxies, one at a time, would take an enormous amount of time. To create such a research sample, we turned to citizen science. We created a web site called Galaxy Zoo (www.galaxyzoo.org) that invites the public to classify the galaxies. New members see a short tutorial and take a short skill test where they classify galaxies of known types. Once they pass the test, they begin to work with the entire sample. The site's interface shows the user an image of a single galaxy from the Sloan Digital Sky Survey. The user clicks a button to classify it. Each classification is stored in a database, associated with the galaxy that it describes. The site has become enormously popular with amateur astronomers, teachers, and others interested in astronomy. So far, more than 110,000 users have joined. We have started a forum where users share images of their favorite galaxies, ask science questions of each other and the "zookeepers," and share classification advice. In a separate poster, we will share science results from the site's first six months of operation. In this poster, we will describe the site as an experiment in public science outreach. We will share user feedback, discuss our plans to study the user community more systematically, and share advice on how to work with citizen science projects to the mutual benefit of both professional and citizen scientists.
Computer systems for annotation of single molecule fragments
Schwartz, David Charles; Severin, Jessica
2016-07-19
There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.
SeedVicious: Analysis of microRNA target and near-target sites.
Marco, Antonio
2018-01-01
Here I describe seedVicious, a versatile microRNA target site prediction software that can be easily fitted into annotation pipelines and run over custom datasets. SeedVicious finds microRNA canonical sites plus other, less efficient, target sites. Among other novel features, seedVicious can compute evolutionary gains/losses of target sites using maximum parsimony, and also detect near-target sites, which have one nucleotide different from a canonical site. Near-target sites are important to study population variation in microRNA regulation. Some analyses suggest that near-target sites may also be functional sites, although there is no conclusive evidence for that, and they may actually be target alleles segregating in a population. SeedVicious does not aim to outperform but to complement existing microRNA prediction tools. For instance, the precision of TargetScan is almost doubled (from 11% to ~20%) when we filter predictions by the distance between target sites using this program. Interestingly, two adjacent canonical target sites are more likely to be present in bona fide target transcripts than pairs of target sites at slightly longer distances. The software is written in Perl and runs on 64-bit Unix computers (Linux and MacOS X). Users with no computing experience can also run the program in a dedicated web-server by uploading custom data, or browse pre-computed predictions. SeedVicious and its associated web-server and database (SeedBank) are distributed under the GPL/GNU license.
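A minimal sketch of the canonical-site and near-target-site definitions used above: a canonical 7mer site is the reverse complement of miRNA bases 2-8, and a near-target differs from it at exactly one position. This illustrates the definitions only, not the seedVicious Perl implementation; the sequences are illustrative.

```python
# Sketch: scan a 3'UTR for canonical seed sites and near-target sites.
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def seed_site(mirna):
    """Reverse complement of miRNA positions 2-8, written as RNA."""
    return mirna[1:8].translate(COMPLEMENT)[::-1]

def scan(utr, site):
    """Return positions of exact (canonical) and one-mismatch (near) sites."""
    hits, near = [], []
    for i in range(len(utr) - len(site) + 1):
        window = utr[i:i + len(site)]
        mismatches = sum(a != b for a, b in zip(window, site))
        if mismatches == 0:
            hits.append(i)
        elif mismatches == 1:
            near.append(i)
    return hits, near

site = seed_site("UGAGGUAGUAGGUUGUAUAGUU")   # let-7a; site is 'CUACCUC'
utr = "ACUAUACAACCUACUACCUCAGGCUACCUAGU"     # toy UTR with one of each
print(site, scan(utr, site))                 # canonical hit plus a near-target
```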
The OSG open facility: A sharing ecosystem
Jayatilaka, B.; Levshina, T.; Rynge, M.; ...
2015-12-23
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of Distributed High-Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is accomplished primarily by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to ensure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.
Hestand, Matthew S; van Galen, Michiel; Villerius, Michel P; van Ommen, Gert-Jan B; den Dunnen, Johan T; 't Hoen, Peter AC
2008-01-01
Background The identification of transcription factor binding sites is difficult since they are only a small number of nucleotides in size, resulting in large numbers of false positives and false negatives in current approaches. Computational methods to reduce false positives are to look for over-representation of transcription factor binding sites in a set of similarly regulated promoters or to look for conservation in orthologous promoter alignments. Results We have developed a novel tool, "CORE_TF" (Conserved and Over-REpresented Transcription Factor binding sites), that identifies common transcription factor binding sites in promoters of co-regulated genes. To improve upon existing binding site predictions, the tool searches for position weight matrices from the TRANSFAC® database that are over-represented in an experimental set compared to a random set of promoters and identifies cross-species conservation of the predicted transcription factor binding sites. The algorithm has been evaluated with expression and chromatin-immunoprecipitation on microarray data. We also implement and demonstrate the importance of matching the random set of promoters to the experimental promoters by GC content, which is a unique feature of our tool. Conclusion The program CORE_TF is accessible through a user-friendly web interface at . It provides a table of over-represented transcription factor binding sites in the user's input genes' promoters and a graphical view of evolutionarily conserved transcription factor binding sites. In our test data sets it successfully predicts target transcription factors and their binding sites. PMID:19036135
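The over-representation test can be sketched as a one-sided hypergeometric comparison between the experimental promoters and an equally sized, GC-matched random set; the statistic below is illustrative, and CORE_TF's exact computation may differ.

```python
from math import comb

def overrep_p(hits_exp: int, n_exp: int, hits_rand: int, n_rand: int) -> float:
    """One-sided hypergeometric p-value that a binding site is over-represented
    in the experimental promoters versus a GC-matched random promoter set.
    Illustrative only; CORE_TF's actual statistic may differ."""
    N = n_exp + n_rand          # all promoters considered
    K = hits_exp + hits_rand    # promoters containing the site anywhere
    n = n_exp                   # size of the experimental set
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(hits_exp, min(K, n) + 1)) / comb(N, n)

# 14 of 20 co-regulated promoters hit, versus 5 of 20 GC-matched random promoters
print(f"p = {overrep_p(14, 20, 5, 20):.4f}")
```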
2011-01-01
information on the use of social media, social networking sites were searched to review online forums. These forums assisted in identifying some of the... tools such as social networking sites, social media, user-generated content, social software, email, instant messaging, and discussion forums (e.g.... social media on official computers. The ban was put into effect because, as was stated, "[social media and social networking sites] in general are a
Gagnon, Hélène; Godin, Gaston; Alary, Michel; Bruneau, Julie; Otis, Joanne
2010-06-01
The aim of this study was to evaluate the efficacy of a theory-based intervention to increase the use of a new syringe for each injection among injection drug users (IDUs). Users of two needle exchange programs (NEPs) were involved. At both sites, participants were assigned at random to either the experimental or the control group. Once a week for four weeks, users reported to the NEPs where they logged onto a computer and received an audiovisual message. A total of 260 IDUs were recruited. At baseline, 52.3% of participants reported that they had not always used new syringes in the previous week. The results indicate that it is possible for IDUs to adopt safer injection practices. One month after the intervention began, participants in the experimental group were using fewer dirty syringes compared to the control group (RR: 0.47; 95% CI: 0.28-0.79; P = .004). This short-term effect was no longer present 3 months later.
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
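For concreteness, a job destined for opportunistic Open Facility cycles might be queued from an OSG access point roughly as follows, using the HTCondor Python bindings. The project name, resource requests, and file names are placeholders, and the '+ProjectName' attribute syntax follows classic submit files; treat this as a sketch rather than OSG's documented recipe.

```python
import htcondor  # HTCondor Python bindings

# Sketch of an opportunistic submission; values below are placeholders.
job = htcondor.Submit({
    "executable": "analyze.sh",
    "arguments": "$(Process)",
    "request_cpus": "1",
    "request_memory": "2GB",
    "request_disk": "4GB",
    "+ProjectName": '"MyProject"',   # accounting project used for OSG reporting
    "output": "out.$(Process)",
    "error": "err.$(Process)",
    "log": "job.log",
})
schedd = htcondor.Schedd()           # the local access point's scheduler
schedd.submit(job, count=10)         # queue ten jobs for opportunistic slots
```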
The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayatilaka, B.; Levshina, T.; Sehgal, C.
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
Merritt, M.L.
1977-01-01
A computerized index of water-data collection activities and retrieval software to generate publication lists of this information was developed for Florida. This system serves a vital need in the administration of the many and diverse water-data collection activities. Previously, needed data were very difficult to assemble for use in program planning or project implementation. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data-collection activity. Entries include information such as identification number, station name, location, type of site, county, information about data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. Updating the index is done routinely. (Woodard-USGS)
Exploring Architectural Details Through a Wearable Egocentric Vision Device
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-01-01
Augmented user experiences in the cultural heritage domain are in increasing demand from the new digital-native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first-person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide information about the details at which he or she is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience. PMID:26901197
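The covariance-of-local-features descriptor at the heart of the retrieval step can be sketched as follows; the five-dimensional feature vectors are stand-ins, not the paper's actual feature set.

```python
import numpy as np

def covariance_descriptor(features: np.ndarray) -> np.ndarray:
    """Region covariance descriptor: 'features' is an (n_points, d) array of
    local feature vectors (e.g. x, y, intensity, gradient components).
    Returns the d x d covariance matrix that summarizes the region."""
    centered = features - features.mean(axis=0)
    return centered.T @ centered / (features.shape[0] - 1)

# Compare two regions; random features stand in for real image measurements.
rng = np.random.default_rng(0)
c1 = covariance_descriptor(rng.normal(size=(500, 5)))
c2 = covariance_descriptor(rng.normal(size=(500, 5)))
print(np.linalg.norm(c1 - c2, ord="fro"))
```

In practice covariance matrices are compared with an affine-invariant or log-Euclidean metric rather than the plain Frobenius distance used here for brevity.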
Exploring Architectural Details Through a Wearable Egocentric Vision Device.
Alletto, Stefano; Abati, Davide; Serra, Giuseppe; Cucchiara, Rita
2016-02-17
Augmented user experiences in the cultural heritage domain are in increasing demand from the new digital-native tourists of the 21st century. In this paper, we propose a novel solution that aims at assisting the visitor during an outdoor tour of a cultural site using the unique first-person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve the details by proposing a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide information about the details at which he or she is currently looking. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience.
RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.
D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano
2015-01-01
The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data, according to the user's needs.
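A modular workflow of this kind can be outlined as a list of named, user-selectable stages. The tool names follow the abstract, but the command lines below are placeholders, not RAP's actual invocations or flags.

```python
import subprocess

# Sketch of a modular, user-customizable RNA-Seq workflow in the spirit of RAP.
# The argument lists are illustrative placeholders only.
STAGES = [
    ("qc",       ["fastqc", "reads.fastq"]),
    ("align",    ["tophat", "genome_index", "reads.fastq"]),
    ("assemble", ["cufflinks", "tophat_out/accepted_hits.bam"]),
    ("quantify", ["htseq-count", "tophat_out/accepted_hits.bam", "genes.gtf"]),
]

def run(selected=None):
    """Run the pipeline, optionally restricted to user-chosen stages."""
    for name, cmd in STAGES:
        if selected and name not in selected:
            continue                      # the user customizes the workflow
        print(f"[{name}] {' '.join(cmd)}")
        subprocess.run(cmd, check=True)   # abort the pipeline if a stage fails

run(selected={"qc", "align"})
```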
RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application
2015-01-01
Background The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). Moreover, the huge volume of data generated by NGS platforms introduces unprecedented computational and technological challenges to efficiently analyze and store sequence data and results. Methods In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Results Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data, according to the user's needs. PMID:26046471
Spring, 1980, DECUS symposium review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, M.J.; Duffy, J.M.; McDonald, W.M.
1980-10-24
The Digital Equipment Computer Users Society (DECUS) holds biannual symposia where its membership and the host company can exchange ideas, problems, and solutions. This report by the newly formed DECUS Local User Group at LLL collects information gathered at the Spring '80 symposium in Chicago on April 22-25. Information is presented for the following special interest groups (SIGs): RSX/IAS SIG, VAX/VMS SIG, PASCAL (languages) SIG, networks SIG, TECO SIG, LSI-11 SIG, RT-11 SIG, site manager SIG, and database SIG. (RWR)
Incorporating a Human-Computer Interaction Course into Software Development Curriculums
ERIC Educational Resources Information Center
Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph
2015-01-01
Individuals have increasing options for retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…
NASA Astrophysics Data System (ADS)
van Lew, Baldur; Botha, Charl P.; Milles, Julien R.; Vrooman, Henri A.; van de Giessen, Martijn; Lelieveldt, Boudewijn P. F.
2015-03-01
The cohort size required in epidemiological imaging genetics studies often mandates the pooling of data from multiple hospitals. Patient data, however, is subject to strict privacy protection regimes, and physical data storage may be legally restricted to a hospital network. To enable biomarker discovery, fast data access and interactive data exploration must be combined with high-performance computing resources, while respecting privacy regulations. We present a system using fast and inherently secure light-paths to access distributed data, thereby obviating the need for a central data repository. A secure private cloud computing framework facilitates interactive, computationally intensive exploration of this geographically distributed, privacy-sensitive data. As a proof of concept, MRI brain imaging data hosted at two remote sites were processed in response to a user command at a third site. The system was able to automatically start virtual machines, run a selected processing pipeline and write results to a user-accessible database, while keeping data locally stored in the hospitals. Individual tasks took approximately 50% longer compared to a locally hosted blade server, but the cloud infrastructure reduced the total elapsed time by a factor of 40 using 70 virtual machines in the cloud. We demonstrated that the combination of light-paths and a private cloud is a viable means of building an analysis infrastructure for secure data analysis. The system requires further work in the areas of error handling, load balancing and secure support of multiple users.
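As a rough consistency check of those figures (assuming the reported 50% per-task slowdown and the 40-fold reduction in elapsed time refer to the same workload), the implied parallel efficiency of the 70-VM cloud run is:

```latex
% Ideal speedup of 70 VMs that each run tasks 1.5x slower than the blade server,
% compared against the observed 40x reduction in total elapsed time:
\[
E \;=\; \frac{S_{\text{observed}}}{S_{\text{ideal}}}
  \;=\; \frac{40}{70 / 1.5}
  \;=\; \frac{40 \times 1.5}{70}
  \;\approx\; 0.86
\]
```

That is, the cloud deployment retained roughly 86% of the ideal speedup despite each virtual machine running tasks more slowly than the local blade server.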
DNA Compass: a secure, client-side site for navigating personal genetic information
Curnin, Charles; Gordon, Assaf; Erlich, Yaniv
2017-01-01
Motivation: Millions of individuals have access to raw genomic data using direct-to-consumer companies. The advent of large-scale sequencing projects, such as the Precision Medicine Initiative, will further increase the number of individuals with access to their own genomic information. However, querying genomic data requires a computer terminal and computational skill to analyze the data, an impediment for the general public. Results: DNA Compass is a website designed to empower the public by enabling simple navigation of personal genomic data. Users can query the status of their genomic variants for over 1658 markers or tens of millions of documented single nucleotide polymorphisms (SNPs). DNA Compass presents the relevant genotypes of the user side-by-side with explanatory scientific resources. The genotype data never leaves the user's computer, a feature that provides improved security and performance. More than 12 000 unique users, mainly from the general genetic genealogy community, have already used DNA Compass, demonstrating its utility. Availability and Implementation: DNA Compass is freely available on https://compass.dna.land. Contact: yaniv@cs.columbia.edu PMID:28334237
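The client-side design can be illustrated with a few lines that parse a typical direct-to-consumer raw file (tab-separated rsid/chromosome/position/genotype with '#' comment lines) and look up a marker locally. DNA Compass itself runs this kind of logic in the browser; the file layout here is the common export format of such services, not a documented DNA Compass input.

```python
def load_genotypes(path: str) -> dict:
    """Parse a direct-to-consumer raw data file entirely on the user's
    machine; nothing is uploaded anywhere."""
    genotypes = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip header/comment lines
            rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
            genotypes[rsid] = (chrom, int(pos), genotype)
    return genotypes

# Query a documented SNP by its rsID, DNA Compass-style:
snps = load_genotypes("my_raw_data.txt")
print(snps.get("rs53576", "not genotyped on this chip"))
```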
Work-site health promotion of frequent computer users: comparing selected interventions.
Blasche, Gerhard; Pfeffer, Manuela; Thaler, Helga; Gollner, Erwin
2013-01-01
Frequent computer use is associated with an increase in musculoskeletal complaints. The present study aims at comparing the relative efficacy of three novel interventions for the prevention of musculoskeletal complaints in frequent computer users. Participants were 93 employees (56 women, 37 men; mean age 40.1 ± 8.8 years) with frequent computer use. They were assigned on the basis of preference to one of the following interventions of 8 weeks' duration: Nordic Walking (NW), biofeedback-assisted relaxation and stretching (BFB), balance exercises on a wobble board (BAL), or a waiting-list control group. Outcome measures were musculoskeletal complaints, emotional well-being, fatigue, and job dissatisfaction, as well as neuromuscular activity in the neck/shoulder region at rest and during computer work, assessed before and after the intervention and at 3 months' follow-up. The average number of training units per week was 2.2 ± 0.8, 5.5 ± 3.5, and 4.1 ± 2.9 for NW, BFB, and BAL, respectively. NW led to short- and medium-term improvement of musculoskeletal complaints, BFB to a short-term improvement of musculoskeletal complaints. Effects on the well-being-related variables or on neuromuscular activity were not found. BAL had no effect on the studied variables. NW, and to a limited extent BFB, are interventions potentially useful for reducing musculoskeletal complaints in frequent computer users.
Intelligent Embedded Instruction for Computer-Aided Design (CAD) systems
1988-10-01
difficulties were predicted and six lessons were prepared that were aimed at preventing error pattern formation. The lessons were programmed in AUTOLISP... and arcs, angles of lines, layering (linetype and color), and block creation and insertion. A program written in AUTOLISP examined the values in the... One site had AutoCAD reference manuals nearby and others had no manuals. * Only one site set a schedule for the users. * The attitudes of managers
Shieh, Fwu-Shan; Jongeneel, Patrick; Steffen, Jamin D; Lin, Selena; Jain, Surbhi; Song, Wei; Su, Ying-Hsiu
2017-01-01
Identification of viral integration sites has been important in understanding the pathogenesis and progression of diseases associated with particular viral infections. The advent of next-generation sequencing (NGS) has enabled researchers to understand the impact that viral integration has on the host, such as tumorigenesis. Current computational methods to analyze NGS data of virus-host junction sites have been limited in terms of their accessibility to a broad user base. In this study, we developed a software application (named ChimericSeq) that is the first program of its kind to offer a graphical user interface and compatibility with both Windows and Mac operating systems, and that is optimized for effectively identifying and annotating virus-host chimeric reads within NGS data. In addition, ChimericSeq's pipeline implements custom filtering to remove artifacts and detect reads with quantitative analytical reporting to provide functional significance to discovered integration sites. The improved accessibility of ChimericSeq through a GUI interface in both Windows and Mac has the potential to expand NGS analytical support to a broader spectrum of the scientific community.
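The core idea (one end of a read matching the viral genome, the rest matching host sequence, with the breakpoint marking a candidate integration site) can be sketched with naive substring search. ChimericSeq itself works on real NGS alignments with artifact filtering; the sequences below are toy data.

```python
def find_junction(read: str, virus: str, host: str, min_match: int = 8):
    """Toy virus-host chimeric read caller: split the read at every position
    and report the first split whose prefix lies in the viral reference and
    whose suffix lies in the host reference."""
    for split in range(min_match, len(read) - min_match + 1):
        left, right = read[:split], read[split:]
        if left in virus and right in host:
            return {"breakpoint_in_read": split,
                    "virus_end": virus.index(left) + len(left),
                    "host_start": host.index(right)}
    return None

virus = "ACGTACGTTAGCAGGCATTC"
host  = "CCATGGACCTTGAAGCTTGG"
read  = "TAGCAGGCATATGGACCTTG"   # 10 bp of virus followed by 10 bp of host
print(find_junction(read, virus, host))
# -> {'breakpoint_in_read': 10, 'virus_end': 18, 'host_start': 2}
```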
Shieh, Fwu-Shan; Jongeneel, Patrick; Steffen, Jamin D.; Lin, Selena; Jain, Surbhi; Song, Wei
2017-01-01
Identification of viral integration sites has been important in understanding the pathogenesis and progression of diseases associated with particular viral infections. The advent of next-generation sequencing (NGS) has enabled researchers to understand the impact that viral integration has on the host, such as tumorigenesis. Current computational methods to analyze NGS data of virus-host junction sites have been limited in terms of their accessibility to a broad user base. In this study, we developed a software application (named ChimericSeq) that is the first program of its kind to offer a graphical user interface and compatibility with both Windows and Mac operating systems, and that is optimized for effectively identifying and annotating virus-host chimeric reads within NGS data. In addition, ChimericSeq's pipeline implements custom filtering to remove artifacts and detect reads with quantitative analytical reporting to provide functional significance to discovered integration sites. The improved accessibility of ChimericSeq through a GUI interface in both Windows and Mac has the potential to expand NGS analytical support to a broader spectrum of the scientific community. PMID:28829778
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites, a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. As concerns scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
"One-Stop Shopping" for Ocean Remote-Sensing and Model Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook
2006-01-01
OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as the MySQL database, Java Web Server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with a Java Applet on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphic software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables, and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction.) We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.
NASA Astrophysics Data System (ADS)
Li, Haiqing; Chatterjee, Samir
With rapid advances in information and communication technology, computer-mediated communication (CMC) technologies are utilizing multiple IT platforms such as email, websites, cell phones/PDAs, social networking sites, and gaming environments. However, no studies have compared the effectiveness of a persuasive system using such alternative channels and various persuasive techniques. Moreover, how affective computing impacts the effectiveness of persuasive systems is not clear. This study proposes that (1) persuasive technology channels in combination with persuasive strategies will have different persuasive effectiveness; and (2) adding positive emotion to a message that leads to a better overall user experience could increase persuasive effectiveness. The affective computing or emotion information was added to the experiment using emoticons. The initial results of a pilot study show that computer-mediated communication channels along with various persuasive strategies can affect persuasive effectiveness to varying degrees. These results also show that adding a positive emoticon to a message leads to a better user experience, which increases the overall persuasive effectiveness of a system.
Geiger, Linda H.
1983-01-01
The report is an update of U.S. Geological Survey Open-File Report 77-703, which described a retrieval program for an administrative index of active data-collection sites in Florida. Extensive changes to the Findex system have been made since 1977, making the previous report obsolete. The data base and computer programs that are available in the Findex system are documented in this report. This system serves a vital need in the administration of the many and diverse water-data collection activities. District offices with extensive data-collection activities will benefit from the documentation of the system. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, frequency of data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. The index is updated routinely. (USGS)
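FINDEX-style retrieval from 80-column card images can be sketched as below; the column layout and sample card are invented for illustration, not the actual FINDEX record format.

```python
# Hypothetical fixed-width layout for a card-image entry (columns invented).
FIELDS = {"site_id": (0, 8), "name": (8, 40), "county": (40, 52), "type": (52, 56)}

def parse_card(card: str) -> dict:
    """Slice one card image into named fields."""
    return {f: card[a:b].strip() for f, (a, b) in FIELDS.items()}

def findex(cards, county=None, site_type=None):
    """Selectively retrieve entries and list them in a publication format."""
    for card in cards:
        rec = parse_card(card)
        if county and rec["county"] != county:
            continue
        if site_type and rec["type"] != site_type:
            continue
        yield "{site_id:<10}{name:<34}{county:<14}{type}".format(**rec)

cards = ["02301500Hillsborough R nr Zephyrhills   Pasco       SW  "]
print("\n".join(findex(cards, county="Pasco")))
```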
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to the central database server are carried out through the secured internet. The information stored in the central database server is shown on the web page. The users can view the web page on the internet. A dedicated and secured web and database server (https) is used to provide information security.
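The 256-bit AES step might look like the following sketch, which uses AES-GCM from the Python cryptography package. The choice of GCM mode and the payload fields are assumptions, since the abstract specifies only 256-bit AES.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch of 256-bit AES protection for one RFID reading before it is sent
# to the remote web/database server; field names are illustrative.
key = AESGCM.generate_key(bit_length=256)   # shared with the central server
aesgcm = AESGCM(key)

reading = b'{"tag": "A1B2C3", "portal": "vault-door-3", "ts": "2010-06-01T12:00:00Z"}'
nonce = os.urandom(12)                      # never reuse a nonce with the same key
ciphertext = aesgcm.encrypt(nonce, reading, b"site-42")  # 'site-42' is authenticated metadata

# The receiving server decrypts (and authenticates) with the same key and nonce.
assert aesgcm.decrypt(nonce, ciphertext, b"site-42") == reading
```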
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study-site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as the USGS Geographic Names Information System, which is available on the World Wide Web.
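The original PLACER is FORTRAN; the core interpolation it describes, mapping a site's plotted position between two reference features with known map and geographic coordinates, can be sketched as below. This is a minimal illustration under assumed inputs, not the program's actual code, and it omits the uncertainty estimate.

```python
# Minimal sketch of the linear interpolation PLACER performs. Two reference
# features with known map (x, y) and geographic coordinates define a linear
# mapping that is applied to a plotted study site. All values are invented.
def interpolate_site(ref1, ref2, site_xy):
    (x1, y1, lon1, lat1), (x2, y2, lon2, lat2) = ref1, ref2
    fx = (site_xy[0] - x1) / (x2 - x1)   # fractional position along x
    fy = (site_xy[1] - y1) / (y2 - y1)   # fractional position along y
    return lon1 + fx * (lon2 - lon1), lat1 + fy * (lat2 - lat1)

# Reference points: map units (e.g., millimeters on the page) -> lon/lat.
ref_a = (10.0, 10.0, -84.75, 33.70)
ref_b = (90.0, 70.0, -84.50, 33.80)
print(interpolate_site(ref_a, ref_b, (50.0, 40.0)))  # -> (-84.625, 33.75)
```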
Carrasco, Alejandro; Jalali, Elnaz; Dhingra, Ajay; Lurie, Alan; Yadav, Sumit; Tadinada, Aditya
2017-06-01
The aim of this study was to compare a medical-grade PACS (picture archiving and communication system) monitor, a consumer-grade monitor, a laptop computer, and a tablet computer for linear measurements of height and width for specific implant sites in the posterior maxilla and mandible, along with visualization of the associated anatomical structures. Cone beam computed tomography (CBCT) scans were evaluated. The images were reviewed using a PACS LCD monitor, a consumer-grade LCD monitor running CB-Works software, a 13″ MacBook Pro, and an iPad 4 running OsiriX DICOM reader software. The operators had to identify anatomical structures in each display using a 2-point scale. User experience between the PACS monitor and the iPad was also evaluated by means of a questionnaire. The measurements were very similar for each device. P-values were all greater than 0.05, indicating no significant difference between the monitors for each measurement. The intraoperator reliability was very high. The user experience was similar in each category, with the most significant difference regarding portability, where the PACS display received the lowest score and the iPad received the highest score. The iPad with retina display was comparable with the medical-grade monitor, producing similar measurements and image visualization, and thus providing an inexpensive, portable, and reliable screen to analyze CBCT images in the operating room during implant surgery.
Leveraging Social Networking in the United States Army
2011-03-16
In 2007, the U.S. Department of Defense (DoD) began blocking social networking sites such as YouTube and MySpace from its computer networks. The DoD subsequently issued a memorandum that set a new policy allowing access to social-networking services (SNS) from its network; the policy extends this access to all users of the network.
Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen
2018-06-27
Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.
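The general approach the abstract describes, sequence scoring features fed to a logistic regression classifier, can be illustrated with a toy sketch. The one-hot encoding and training windows below are invented stand-ins, not Quokka's actual scoring functions or benchmark data.

```python
# Hedged sketch of phosphorylation-site prediction via logistic regression;
# the encoding and the tiny training set are toy examples only.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(window: str) -> np.ndarray:
    """Encode a fixed-length sequence window around a candidate S/T/Y site."""
    vec = np.zeros(len(window) * len(AMINO))
    for i, aa in enumerate(window):
        if aa in AMINO:
            vec[i * len(AMINO) + AMINO.index(aa)] = 1.0
    return vec

# Toy positive/negative 9-mer windows centered on the candidate residue.
pos = ["RRRSVSSSA", "KKKSLSPAA"]
neg = ["GGGSAGGGG", "PPPSVPPPP"]
X = np.array([one_hot(w) for w in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([one_hot("RRKSVSSPA")])[:, 1])  # site probability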
G-LoSA for Prediction of Protein-Ligand Binding Sites and Structures.
Lee, Hui Sun; Im, Wonpil
2017-01-01
Recent advances in high-throughput structure determination and computational protein structure prediction have significantly enriched the universe of protein structure. However, there is still a large gap between the number of available protein structures and that of proteins with annotated function in high accuracy. Computational structure-based protein function prediction has emerged to reduce this knowledge gap. The identification of a ligand binding site and its structure is critical to the determination of a protein's molecular function. We present a computational methodology for predicting small molecule ligand binding site and ligand structure using G-LoSA, our protein local structure alignment and similarity measurement tool. All the computational procedures described here can be easily implemented using G-LoSA Toolkit, a package of standalone software programs and preprocessed PDB structure libraries. G-LoSA and G-LoSA Toolkit are freely available to academic users at http://compbio.lehigh.edu/GLoSA . We also illustrate a case study to show the potential of our template-based approach harnessing G-LoSA for protein function prediction.
High Precision Prediction of Functional Sites in Protein Structures
Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin
2014-01-01
We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
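The classifier setup named in the abstract, an SVM over per-residue microenvironment feature vectors, is easy to sketch. The features below are random placeholders rather than real physicochemical measurements, so this shows only the shape of the method.

```python
# Illustrative sketch of an SVM trained on per-residue microenvironment
# feature vectors, as in FEATURE; values are random placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_features = 80                      # e.g., properties in radial shells
X_pos = rng.normal(1.0, 1.0, (40, n_features))   # functional-site examples
X_neg = rng.normal(0.0, 1.0, (200, n_features))  # background residues
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 40 + [0] * 200)

clf = SVC(kernel="rbf", probability=True).fit(X, y)
scores = clf.predict_proba(rng.normal(0.5, 1.0, (5, n_features)))[:, 1]
# High-precision use: accept only predictions above a strict score cutoff.
print(scores)
```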
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
Dauz, Emily; Moore, Jan; Smith, Carol E; Puno, Florence; Schaag, Helen
2004-01-01
This article describes the experiences of nurses who, as part of a large clinical trial, brought the Internet into older adults' homes by installing a computer, if needed, and connecting to a patient education Web site. Most of these patients had not previously used the Internet and were taught even basic computer skills when necessary. Because of increasing use of the Internet in patient education, assessment, and home monitoring, nurses in various roles currently connect with patients to monitor their progress, teach about medications, and answer questions about appointments and treatments. Thus, nurses find themselves playing the role of technology managers for patients with home-based Internet connections. This article provides step-by-step procedures for computer installation and training in the form of protocols, checklists, and patient user guides. By following these procedures, nurses can install computers, arrange Internet access, teach and connect to their patients, and prepare themselves to install future generations of technological devices.
Computational Prediction of Protein-Protein Interactions
Ehrenberger, Tobias; Cantley, Lewis C.; Yaffe, Michael B.
2015-01-01
The prediction of protein-protein interactions and kinase-specific phosphorylation sites on individual proteins is critical for correctly placing proteins within signaling pathways and networks. The importance of this type of annotation continues to increase with the continued explosion of genomic and proteomic data, particularly with emerging data categorizing posttranslational modifications on a large scale. A variety of computational tools are available for this purpose. In this chapter, we review the general methodologies for these types of computational predictions and present a detailed user-focused tutorial of one such method and computational tool, Scansite, which is freely available to the entire scientific community over the Internet. PMID:25859943
How to use the Stand-Damage Model: Version 2.0. (Computer program)
J.J. Colbert; George Racin
2001-01-01
The Stand-Damage Model simulates the growth of a forest stand, a spatially homogeneous collection of trees growing on a site. The model simulates growth from an initial inventory, user-prescribed management practices, and the effects of gypsy moth defoliation. Here we provide installation and operating instructions for Version 2.0.
Shuttle OFT Level C navigation requirements
NASA Technical Reports Server (NTRS)
1980-01-01
Detailed requirements for the orbital operations computer loads, OPS 2, and OPS 8 are given. These requirements represent the total on-orbit/rendezvous navigation baseline requirements for the following principal functions: on-orbit/rendezvous navigation sequencer; on-orbit/rendezvous UPP sequencer; on-orbit/rendezvous navigation; on-orbit prediction; on-orbit user parameter processing; and landing site update.
Human thermal comfort in urban outdoor spaces
Lee P. Herrington; J. S. Vittum
1977-01-01
Measurements of the physical environment of urban open spaces in Syracuse, New York, were used to compute the physiological responses of human users of the spaces. These calculations were then used to determine what environmental variables were both important to human comfort and susceptible to control by site design. Although air temperature and humidity are important...
Real-Time Mapping alert system; user's manual
Torres, L.A.
1996-01-01
The U.S. Geological Survey has an extensive hydrologic network that records and transmits precipitation, stage, discharge, and other water-related data on a real-time basis to an automated data processing system. Data values are recorded on electronic data collection platforms at field monitoring sites. These values are transmitted by means of orbiting satellites to receiving ground stations, and by way of telecommunication lines to a U.S. Geological Survey office where they are processed on a computer system. Data that exceed predefined thresholds are identified as alert values. These alert values can help keep water-resource specialists informed of current hydrologic conditions. The current alert status at monitoring sites is of critical importance during floods, hurricanes, and other extreme hydrologic events where quick analysis of the situation is needed. This manual provides instructions for using the Real-Time Mapping software, a series of computer programs developed by the U.S. Geological Survey for quick analysis of hydrologic conditions, and guides users through a basic interactive session. The software provides interactive graphics display and query of real-time information in a map-based, menu-driven environment.
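The alert logic the manual describes, comparing incoming real-time values against predefined per-site thresholds, can be sketched minimally. The site numbers, parameter names, and limits below are hypothetical.

```python
# Minimal sketch of threshold-based alerting on real-time hydrologic data.
# Site IDs, parameters, and limits are invented for illustration.
THRESHOLDS = {"02246000": {"stage_ft": 12.0}, "02231000": {"stage_ft": 9.5}}

def find_alerts(readings):
    """Yield (site, parameter, value) for values exceeding their threshold."""
    for r in readings:
        limits = THRESHOLDS.get(r["site"], {})
        for param, limit in limits.items():
            if r.get(param, float("-inf")) > limit:
                yield r["site"], param, r[param]

data = [{"site": "02246000", "stage_ft": 13.2},
        {"site": "02231000", "stage_ft": 4.1}]
print(list(find_alerts(data)))  # [('02246000', 'stage_ft', 13.2)]
```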
McAlearney, Ann Scheck; Schweikhart, Sharon B; Medow, Mitchell A
2005-01-01
To describe strategies that organizations select to support physicians' use of handheld computers (HHCs) in clinical practice and to explore issues about facilitating HHC use. A multidisciplinary team used focus groups and interviews with clinical, administrative, and information technology (IT) staff to gather data from 161 informants at seven sites. Transcripts were coded using a combination of deductive and inductive approaches to both answer research questions and identify patterns and themes that emerged in the data. Answers to questions about strategies for HHC support and themes about (1) how to facilitate physician adoption and use and (2) organizational concerns. Three main organizational strategies for HHC support were characterized among sites: (1) active support for broad-based use, (2) active support for niche use, and (3) basic support for individual physician users. Three high-level themes emerged around how to best facilitate physician adoption and use of HHCs: (1) improving usability and usefulness, (2) promoting HHCs and device use, and (3) providing training and support. However, four major themes also emerged related to organizations' concerns about HHC use: (1) security-related concerns, (2) economic concerns, (3) technical concerns, and (4) strategic concerns. An organizational approach to HHC support that involves individualized attention to existing and potential physician users rather than one-size-fits-all, organization-wide implementation efforts was an important facilitator promoting physician use of HHCs. Health care organizations interested in supporting HHC use must consider issues related to security, economics, and IT strategy that may not be prominent concerns for physician users.
Methods for Improving the User-Computer Interface. Technical Report.
ERIC Educational Resources Information Center
McCann, Patrick H.
This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility provides computing systems for advancing energy technologies, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Information on these systems and how to access them is available from NREL.
The Gain of Resource Delegation in Distributed Computing Environments
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In the case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to other sites can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups, we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.
2011-01-01
The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Fast SS-ILM: A Computationally Efficient Algorithm to Discover Socially Important Locations
NASA Astrophysics Data System (ADS)
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places that are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important locations discovery process by up to 20%.
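A conceptual sketch of the underlying idea (not the published algorithm): a location counts as socially important if enough distinct users visit it at least a minimum number of times. The thresholds and check-in records are illustrative.

```python
# Conceptual sketch only: mark a place "socially important" when at least
# `min_users` distinct users each visit it `min_visits` or more times.
from collections import defaultdict

def important_locations(checkins, min_visits=3, min_users=2):
    visits = defaultdict(lambda: defaultdict(int))  # place -> user -> count
    for user, place in checkins:
        visits[place][user] += 1
    return [p for p, per_user in visits.items()
            if sum(1 for c in per_user.values() if c >= min_visits) >= min_users]

checkins = [("u1", "cafe"), ("u1", "cafe"), ("u1", "cafe"),
            ("u2", "cafe"), ("u2", "cafe"), ("u2", "cafe"),
            ("u3", "park")]
print(important_locations(checkins))  # ['cafe']
```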
Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.
Everitt, J H; Yang, C
2007-11-01
A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
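For orientation, the two accuracy measures reported above are standard confusion-matrix statistics: producer's accuracy measures omission error (reference pixels correctly classified) and user's accuracy measures commission error (classified pixels that are correct). The matrix below is a small hypothetical example, not the study's data.

```python
# Worked example of producer's and user's accuracy from a hypothetical
# confusion matrix (reference classes in rows, map classes in columns).
import numpy as np

# rows/cols: [broom snakeweed, other vegetation]
cm = np.array([[59, 1],     # reference snakeweed pixels
               [ 8, 132]])  # reference other pixels

producers = np.diag(cm) / cm.sum(axis=1)  # omission view: 59/60, 132/140
users     = np.diag(cm) / cm.sum(axis=0)  # commission view: 59/67, 132/133
print(f"producer's accuracy (snakeweed): {producers[0]:.1%}")
print(f"user's accuracy (snakeweed): {users[0]:.1%}")
```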
Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data
NASA Astrophysics Data System (ADS)
Koranda, Scott
2004-03-01
The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses and recent lessons learned need to be integrated into the next generation of Grid middleware.
Lyons, Kenneth R; Joshi, Sanjay S
2013-06-01
Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wifi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
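The control idea, estimating power in two frequency bands of one sEMG channel and mapping the pair to 2D cursor motion, can be sketched as follows. The band edges and scaling are illustrative assumptions, not the study's parameters.

```python
# Sketch of single-signal, two-band control: band powers of one sEMG channel
# drive the two cursor axes. Band limits and the test signal are invented.
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].mean()

fs = 1000.0
t = np.arange(0, 0.25, 1.0 / fs)
emg = np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 160 * t)

vx = band_power(emg, fs, 60, 120)    # low band drives horizontal motion
vy = band_power(emg, fs, 120, 220)   # high band drives vertical motion
print(f"cursor velocity ~ ({vx:.1f}, {vy:.1f})")
```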
NASA Astrophysics Data System (ADS)
XIA, J.; Yang, C.; Liu, K.; Huang, Q.; Li, Z.
2013-12-01
Big Data becomes increasingly important in almost all scientific domains, especially in geoscience, where hundreds to millions of sensors are collecting data about the Earth continuously (Whitehouse 2012). With the explosive growth of data, various Geospatial Cyberinfrastructure (GCI) components (Yang et al. 2010) have been developed to manage geospatial resources and provide data access for the public. These GCIs are accessed intensively by different users on a daily basis. However, little research has been done to analyze the spatiotemporal patterns of user behavior, which could be critical to the management of Big Data and the operation of GCIs (Yang et al. 2011). For example, the spatiotemporal distribution of end users helps us better arrange and locate GCI computing facilities, and a better indexing and caching mechanism could be developed based on the spatiotemporal pattern of user queries. In this paper, we use the GEOSS Clearinghouse as an example to investigate spatiotemporal patterns of user behavior in GCIs. The investigation results show that user behaviors are heterogeneous but exhibit patterns across space and time. Identified patterns include (1) high access-frequency regions; (2) local interests; (3) periodical accesses and rush hours; and (4) spiking access. Based on the identified patterns, this presentation reports several solutions to better support the operation of the GEOSS Clearinghouse and other GCIs. Keywords: Big Data, EarthCube, CyberGIS, Spatiotemporal Thinking and Computing, Data Mining, User Behavior. References: Fayyad, U. M., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. 1996. Advances in Knowledge Discovery and Data Mining. Whitehouse. 2012. Obama administration unveils 'BIG DATA' initiative: announces $200 million in new R&D investments. Retrieved from http://www.whitehouse.gov/sites/default/files/microsites/ostp/big_data_press_release_final_2.pdf [Accessed 14 June 2013]. Yang, C., Wu, H., Huang, Q., Li, Z., & Li, J. 2011. Using spatial principles to optimize distributed computing for enabling the physical science discoveries. Proceedings of the National Academy of Sciences, 108(14), 5498-5503. doi:10.1073/pnas.0909315108. Yang, C., Raskin, R., Goodchild, M., & Gahegan, M. 2010. Geospatial Cyberinfrastructure: Past, present and future. Computers, Environment and Urban Systems, 34(4), 264-277. doi:10.1016/j.compenvurbsys.2010.04.001.
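The kind of spatiotemporal aggregation that exposes the listed patterns (high-access regions, rush hours) can be sketched minimally; the log records below are invented examples, not Clearinghouse data.

```python
# Minimal sketch: bin access-log records by country and by hour of day to
# surface high-access regions and rush hours. Records are invented.
from collections import Counter

logs = [
    {"country": "US", "hour": 14}, {"country": "US", "hour": 15},
    {"country": "DE", "hour": 9},  {"country": "US", "hour": 14},
]

by_region = Counter(rec["country"] for rec in logs)   # high-access regions
by_hour = Counter(rec["hour"] for rec in logs)        # rush hours
print(by_region.most_common(1), by_hour.most_common(1))
```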
WeaVR: a self-contained and wearable immersive virtual environment simulation system.
Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James
2015-03-01
We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
THREaD Mapper Studio: a novel, visual web server for the estimation of genetic linkage maps
Cheema, Jitender; Ellis, T. H. Noel; Dicks, Jo
2010-01-01
The estimation of genetic linkage maps is a key component in plant and animal research, providing both an indication of the genetic structure of an organism and a mechanism for identifying candidate genes associated with traits of interest. Because of this importance, several computational solutions to genetic map estimation exist, mostly implemented as stand-alone software packages. However, the estimation process is often largely hidden from the user. Consequently, problems such as a program crashing may occur that leave a user baffled. THREaD Mapper Studio (http://cbr.jic.ac.uk/threadmapper) is a new web site that implements a novel, visual and interactive method for the estimation of genetic linkage maps from DNA markers. The rationale behind the web site is to make the estimation process as transparent and robust as possible, while also allowing users to use their expert knowledge during analysis. Indeed, the 3D visual nature of the tool allows users to spot features in a data set, such as outlying markers and potential structural rearrangements that could cause problems with the estimation procedure and to account for them in their analysis. Furthermore, THREaD Mapper Studio facilitates the visual comparison of genetic map solutions from third party software, aiding users in developing robust solutions for their data sets. PMID:20494977
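For orientation, one standard formula behind linkage-map estimation is Haldane's mapping function, which converts an observed recombination fraction r into a map distance; THREaD Mapper's full estimation method is more involved than this single step.

```python
# Haldane's mapping function: map distance d = -50 * ln(1 - 2r) centimorgans,
# valid for recombination fractions 0 <= r < 0.5.
import math

def haldane_cm(r: float) -> float:
    return -50.0 * math.log(1.0 - 2.0 * r)

for r in (0.01, 0.10, 0.25):
    print(f"r = {r:.2f} -> {haldane_cm(r):.1f} cM")
```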
Remote Internet access to advanced analytical facilities: a new approach with Web-based services.
Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C
2012-09-04
Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed discussion of their significance.
Synchronized computational architecture for generalized bilateral control of robot arms
NASA Technical Reports Server (NTRS)
Szakaly, Zoltan F. (Inventor)
1991-01-01
A master six-degree-of-freedom Force Reflecting Hand Controller (FRHC) is available at a master site where a received image displays, in essentially real time, a remote robotic manipulator which is being controlled in the corresponding six degrees of freedom by command signals which are transmitted to the remote site in accordance with the movement of the FRHC at the master site. Software is user-initiated at the master site in order to establish the basic system conditions, and then a physical movement of the FRHC in Cartesian space is reflected at the master site by six absolute numbers that are sensed, translated and computed as a difference signal relative to the earlier position. The change in position is then transmitted in that differential signal form over a high speed synchronized bilateral communication channel which simultaneously returns robot-sensed response information to the master site as forces applied to the FRHC so that the FRHC reflects the feel of what is taking place at the remote site. A system-wide clock rate is selected at a sufficiently high rate that the operator at the master site experiences the Force Reflecting operation in real time.
Scaling the CERN OpenStack cloud
NASA Astrophysics Data System (ADS)
Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.
2015-12-01
CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.
Using VirtualGL/TurboVNC Software on the Peregrine System | High-Performance Computing | NREL
VirtualGL/TurboVNC software on the Peregrine system allows users to access and share large-memory visualization nodes with high-end graphics processing units; it may be better than just using X11 forwarding when connecting from a remote site with low bandwidth.
Assessing the Quality of Academic Libraries on the Web: The Development and Testing of Criteria.
ERIC Educational Resources Information Center
Chao, Hungyune
2002-01-01
This study develops and tests an instrument useful for evaluating the quality of academic library Web sites. Discusses criteria for print materials and human-computer interfaces; user-based perspectives; the use of factor analysis; a survey of library experts; testing reliability through analysis of variance; and regression models. (Contains 53…
Quality evaluation of portal sites in health system, as a tool for education and learning.
Hejazi, Sayed Mehdi; Sarmadi, Sima
2013-01-01
The main objective of creating a portal is to make information services available for users who need them for the performance of duties and responsibilities, regardless of the sources. This article attempts to consider the parameters by which such sites can be evaluated, since these criteria can be effective in designing and implementing such portals. On the other hand, portal sites in the health systems of every country make it possible for leaders, policy makers, and directors to use system education as a tool for new learning technologies. One of the main decisions each manager has to make is the precise selection of appropriate portal sites. This is a descriptive and qualitative study. The research sample was 53 computer professionals working in the area of computer programming and design. In the first part of the study a questionnaire was sent to the participants, and in the second part of the study, based on their responses to the questionnaire, the participants were interviewed and the main themes of the study were formulated. The validity and the reliability of the questionnaire were confirmed. The study results were summarized in 10 themes and 50 sub-categories. The main themes included portal requirements, security, management, efficiency, user friendliness, built-in applications, portal flexibility, interoperability, and support systems. Portal sites in any education system make it possible for health system leaders and policy makers to manage their organization's information system efficiently and effectively. One of the major decisions each manager has to make is the precise selection of an appropriate portal site design and development. The themes and sub-categories of this study could help health system managers, policy makers, and information technology professionals to make appropriate decisions regarding portal design and development.
Controlling user access to electronic resources without password
Smith, Fred Hewitt
2015-06-16
Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource proximal environmental information. In at least some embodiments, the process further includes comparing user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
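The comparison step in the patent abstract, granting access only when user-proximal environmental indicia are sufficiently similar to those pre-associated with the resource, can be sketched as below. The attributes, similarity measure, and threshold are hypothetical choices for illustration.

```python
# Hedged sketch of environmental-indicia access control. The attribute
# names, the match-fraction similarity, and the threshold are invented.
def similarity(a: dict, b: dict) -> float:
    """Fraction of environmental attributes that match between two sets."""
    keys = set(a) | set(b)
    return sum(1 for k in keys if a.get(k) == b.get(k)) / len(keys)

RESOURCE_ENV = {"wifi_ssid": "LAB-NET", "cell_id": "310-26-4521",
                "room_beacon": "0xB7"}

def grant_access(user_env: dict, threshold: float = 0.6) -> bool:
    """Grant access on a sufficiently favorable comparison."""
    return similarity(user_env, RESOURCE_ENV) >= threshold

print(grant_access({"wifi_ssid": "LAB-NET", "cell_id": "310-26-4521",
                    "room_beacon": "0xFF"}))  # True: 2 of 3 attributes match
```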
Study of USGS/NASA land use classification system. [computer analysis from LANDSAT data
NASA Technical Reports Server (NTRS)
Spann, G. W.
1975-01-01
The results of a computer mapping project using LANDSAT data and the USGS/NASA land use classification system are summarized. During the computer mapping portion of the project, accuracies of 67 percent to 79 percent were achieved using Level II of the classification system and a 4,000 acre test site centered on Douglasville, Georgia. Analysis of responses to a questionnaire circulated to actual and potential LANDSAT data users reveals several important findings: (1) there is a substantial desire for additional information related to LANDSAT capabilities; (2) a majority of the respondents feel computer mapping from LANDSAT data could aid present or future projects; and (3) the costs of computer mapping are substantially less than those of other methods.
Optimizing Resource Utilization in Grid Batch Systems
NASA Astrophysics Data System (ADS)
Gellrich, Andreas
2012-12-01
On Grid sites, the requirements of the computing tasks (jobs) to computing, storage, and network resources differ widely. For instance Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then allows to distinguish jobs, if users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to setup and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.
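The mapping idea described above, VOMS group/role attributes in the proxy determining a local UID and hence the batch class a job lands in, can be illustrated with a toy lookup table. The entries are examples only, not a real site configuration.

```python
# Illustrative sketch: VOMS (VO, role) pairs map to a local account and a
# batch class, so CPU-bound production and I/O-bound analysis jobs can be
# distributed differently over compute nodes. Table entries are invented.
MAP = {
    ("atlas", "production"): ("atlasprd", "cpu_bound"),   # Monte Carlo jobs
    ("atlas", "analysis"):   ("atlasusr", "io_bound"),    # data-intensive jobs
}

def classify_job(vo: str, role: str):
    uid, job_class = MAP.get((vo, role), ("gridguest", "default"))
    return uid, job_class

print(classify_job("atlas", "production"))  # ('atlasprd', 'cpu_bound')
```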
Secure Genomic Computation through Site-Wise Encryption
Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu
2015-01-01
Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278
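To make the searching idea concrete: one way to support exact-match queries over encrypted per-site genotypes is to replace each (position, allele) pair with a keyed, deterministic tag. The HMAC-based sketch below is a simplified stand-in for illustration, not the paper's actual construction.

```python
# Conceptual sketch only: deterministic keyed tags per genomic site allow
# equality search on a public cloud without revealing plaintext sites.
# This is NOT the published scheme; it illustrates the search property.
import hashlib
import hmac

KEY = b"32-byte-secret-key-held-off-cloud"

def site_tag(position: int, allele: str) -> str:
    msg = f"{position}:{allele}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

# Tag a genome's sites, then test a disease marker for membership without
# exposing positions or alleles to the cloud.
genome = {site_tag(12345, "A"), site_tag(67890, "G")}
marker = site_tag(12345, "A")
print(marker in genome)  # True
```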
Decentralized Grid Scheduling with Evolutionary Fuzzy Systems
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address the problem of finding workload exchange policies for decentralized Computational Grids using an Evolutionary Fuzzy System. To this end, we establish a non-invasive collaboration model on the Grid layer which requires minimal information about the participating High Performance and High Throughput Computing (HPC/HTC) centers and which leaves the local resource managers completely untouched. In this environment of fully autonomous sites, independent users are assumed to submit their jobs to the Grid middleware layer of their local site, which in turn decides on the delegation and execution either on the local system or on remote sites in a situation-dependent, adaptive way. We find for different scenarios that the exchange policies show good performance characteristics not only with respect to traditional metrics such as average weighted response time and utilization, but also in terms of robustness and stability in changing environments.
POOL server: machine learning application for functional site prediction in proteins.
Somarowthu, Srinivas; Ondrechen, Mary Jo
2012-08-01
We present an automated web server for partial order optimum likelihood (POOL), a machine learning application that combines computed electrostatic and geometric information for high-performance prediction of catalytic residues from 3D structures. Input features consist of THEMATICS electrostatics data and pocket information from ConCavity. THEMATICS measures deviation from typical, sigmoidal titration behavior to identify functionally important residues, and ConCavity identifies binding pockets by analyzing the surface geometry of protein structures. Both THEMATICS and ConCavity (structure only) do not require the query protein to have any sequence or structure similarity to other proteins. Hence, POOL is applicable to proteins with novel folds and engineered proteins. As an additional option for cases where sequence homologues are available, users can include evolutionary information from INTREPID for enhanced accuracy in site prediction. The web site is free and open to all users with no login requirements at http://www.pool.neu.edu. Contact: m.ondrechen@neu.edu. Supplementary data are available at Bioinformatics online.
A multipurpose computing center with distributed resources
NASA Astrophysics Data System (ADS)
Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.
2017-10-01
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). OSG stack is installed for the NOvA experiment. Other groups of users use directly local batch system. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of the CESNET - the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources using the standard ATLAS tools in the same way as the local storage without noticing this geographical distribution. Computing clusters LUNA and EXMAG dedicated to users mostly from the Solid State Physics departments offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum with distributed batch system based on torque with a custom scheduler. Clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access only to a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.
ThermoBuild: Online Method Made Available for Accessing NASA Glenn Thermodynamic Data
NASA Technical Reports Server (NTRS)
McBride, Bonnie; Zehe, Michael J.
2004-01-01
The new Web site program "ThermoBuild" allows users to easily access and use the NASA Glenn Thermodynamic Database of over 2000 solid, liquid, and gaseous species. A convenient periodic table allows users to "build" the molecules of interest and designate the temperature range over which thermodynamic functions are to be displayed. ThermoBuild also allows users to build custom databases for use with NASA's Chemical Equilibrium with Applications (CEA) program or other programs that require the NASA format for thermodynamic properties. The NASA Glenn Research Center has long been a leader in the compilation and dissemination of up-to-date thermodynamic data, primarily for use with the NASA CEA program, but increasingly for use with other computer programs.
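Data in the NASA Glenn database are stored as 9-coefficient polynomial fits to the thermodynamic functions; for instance, the dimensionless heat capacity is Cp/R = a1 T^-2 + a2 T^-1 + a3 + a4 T + a5 T^2 + a6 T^3 + a7 T^4 over a stated temperature range. The sketch below evaluates that form; the coefficient values are placeholders, not real database entries.

```python
# Evaluate Cp/R from the NASA Glenn 9-coefficient polynomial form.
# (Coefficients a8, a9 are integration constants for enthalpy and entropy
# and are not used for Cp.) The sample coefficients are placeholders.
def cp_over_R(T: float, a: list[float]) -> float:
    """Dimensionless heat capacity Cp/R at temperature T (kelvin)."""
    a1, a2, a3, a4, a5, a6, a7 = a[:7]
    return a1 / T**2 + a2 / T + a3 + a4 * T + a5 * T**2 + a6 * T**3 + a7 * T**4

coeffs = [1.0e4, -1.0e2, 3.5, 1.0e-4, 0.0, 0.0, 0.0, 0.0, 0.0]
print(cp_over_R(1000.0, coeffs))
```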
Analysis of field test data on residential heating and cooling
NASA Astrophysics Data System (ADS)
Talbert, S. G.
1980-12-01
The computer program, which uses field site data collected on 48 homes located in six cities in different climatic regions of the United States, is discussed. In addition, a User's Guide was prepared for the computer program, which is contained in a separate two-volume document entitled User's Guide for REAP: Residential Energy Analysis Program. Feasibility studies were conducted pertaining to potential improvements for REAP, including: the addition of an oil-furnace model; improving the infiltration subroutine; adding active and/or passive solar subroutines; incorporating a thermal energy storage model; and providing dual HVAC systems (e.g., heat pump-gas furnace). The purpose of REAP is to enable building designers and energy analysts to evaluate how such factors as building design, weather conditions, internal heat loads, and HVAC equipment performance influence the energy requirements of residential buildings.
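As a toy illustration of the kind of estimate an energy-analysis program like REAP produces, the classic degree-day method relates seasonal heating fuel use to a building's heat-loss coefficient, heating degree-days, and furnace efficiency. The numbers below are invented; REAP's actual models are far more detailed.

```python
# Degree-day heating estimate: fuel input = UA * 24 h/day * HDD / efficiency.
# UA is the building heat-loss coefficient (Btu/h/degF); HDD is heating
# degree-days (degF-day). All values here are invented for illustration.
def seasonal_heating_btu(ua_btu_per_hr_f: float, degree_days_f: float,
                         efficiency: float) -> float:
    return ua_btu_per_hr_f * 24.0 * degree_days_f / efficiency

print(f"{seasonal_heating_btu(450.0, 6000.0, 0.75):,.0f} Btu per season")
```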
NASA Astrophysics Data System (ADS)
Murray, Jo; Lawden, Mike
1988-06-01
This will be a year of metamorphosis for Starlink with major hardware upgrades and the widespread adoption of the ADAM software environment. Most of the original VAXs will be replaced with MicroVAX 3500 based Local Area VAX Clusters (LAVCs); also disappearing from these sites will be the old ARGS as the Digisolve Ikon becomes the principal image display. This comes ten years after the idea of co-ordinating the computing activities of the UK astronomical community was first mooted. Those pre-natal days of Starlink are remembered by Professor Mike Disney in the article which follows. Many Starlink users will be surprised to learn that a centralised computing facility with only remote access or occasional visits for users was considered! (Actually the first machine, the RAL database/communications MicroVAX, began operating a user service only this year.) It is instructive to look at some of the proposals for national data-analysis facilities which were forerunners of Starlink. For example, funding of £40,000 initially with £10,000 annually was requested for computer enhancements at RGO to provide a spectral reduction service. The hardware recommendation was based on a single Interdata 10 computer with a 64KByte memory and a total of 5MByte of disc! Once again, it is a pleasure to thank all the contributors to the Bulletin, and invite our readers to contribute to forthcoming editions.
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914
Alternative treatment technology information center computer database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, D.
1995-10-01
The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.
Palumbo, Michael J; Newberg, Lee A
2010-07-01
The transcription of a gene from its DNA template into an mRNA molecule is the first, and most heavily regulated, step in gene expression. Especially in bacteria, regulation is typically achieved via the binding of a transcription factor (protein) or small RNA molecule to the chromosomal region upstream of a regulated gene. The protein or RNA molecule recognizes a short, approximately conserved sequence within a gene's promoter region and, by binding to it, either enhances or represses expression of the nearby gene. Since the sought-for motif (pattern) is short and accommodating to variation, computational approaches that scan for binding sites have trouble distinguishing functional sites from look-alikes. Many computational approaches are unable to find the majority of experimentally verified binding sites without also finding many false positives. Phyloscan overcomes this difficulty by exploiting two key features of functional binding sites: (i) these sites are typically more conserved evolutionarily than are non-functional DNA sequences; and (ii) these sites often occur two or more times in the promoter region of a regulated gene. The website is free and open to all users, and there is no login requirement. Address: (http://bayesweb.wadsworth.org/phyloscan/).
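To make the scanning idea concrete, here is a toy Python sketch of scoring sequence windows against a position weight matrix (PWM); Phyloscan's actual statistics additionally exploit evolutionary conservation and multiple-site evidence, and the motif and sequence below are invented.

    # Log-odds scores per motif position; values are hypothetical.
    pwm = {
        'A': [1.2, -0.5, 0.3],
        'C': [-0.8, 1.0, -0.2],
        'G': [0.1, -0.3, 1.1],
        'T': [-1.0, 0.2, -0.9],
    }
    width = 3

    def score(window):
        # Sum the per-position log-odds contributions of each base.
        return sum(pwm[base][i] for i, base in enumerate(window))

    sequence = "ACGTGACG"  # toy promoter fragment
    for i in range(len(sequence) - width + 1):
        window = sequence[i:i + width]
        print(i, window, round(score(window), 2))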
Top-down methodology for human factors research
NASA Technical Reports Server (NTRS)
Sibert, J.
1983-01-01
User-computer interaction as a conversation is discussed. The design of user interfaces that depends on viewing communications between a user and the computer as a conversation is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side. Two languages are modeled: the one with which the user communicates with the computer, and the one in which communication flows from the computer to the user. Both languages exist on three levels: the semantic, syntactic, and lexical. It is suggested that natural languages can also be considered in these terms.
Local storage federation through XRootD architecture for interactive distributed analysis
NASA Astrophysics Data System (ADS)
Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.
2015-12-01
A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites, with the aim of creating a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data-intensive analyses, is the main challenge of these facilities. One of the crucial aspects of user-driven analysis execution is data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome such a limitation a federated infrastructure, which provides full access to all the data belonging to the federation independently of the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies in order to provide a dynamic, easy-to-use and well-performing solution for data handling. It should allow the users to store files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.
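As a hedged sketch of how a client might read data through such a federation using the XRootD Python bindings (the redirector host and file path below are placeholders, not the Bari deployment's actual endpoints):

    from XRootD import client

    # Open a file through the federation redirector; a local replica is used
    # if present, otherwise the request is redirected to the site holding it.
    with client.File() as f:
        status, _ = f.open("root://redirector.example.org//alice/data/file.root")
        if status.ok:
            status, data = f.read(offset=0, size=1024)
            print(len(data), "bytes read")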
Dehydration of 1-octadecanol over H-BEA: A combined experimental and computational study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Wenji; Liu, Yuanshuai; Barath, Eszter
Liquid phase dehydration of 1-octadecanol, which is intermediately formed during the hydrodeoxygenation of microalgae oil, has been explored in a combined experimental and computational study. The alkyl chain of the C18 alcohol interacts with acid sites during diffusion inside the zeolite pores, resulting in an inefficient utilization of the Brønsted acid sites for samples with high acid site concentrations. The parallel intra- and inter-molecular dehydration pathways, having different activation energies, pass through alternative reaction intermediates. Formation of surface-bound alkoxide species is the rate-limiting step during intramolecular dehydration, whereas intermolecular dehydration proceeds via a bulky dimer intermediate. Octadecene is the primary dehydration product over H-BEA at 533 K. Despite the main contribution of Brønsted acid sites towards both dehydration pathways, Lewis acid sites are also active in the formation of dioctadecyl ether. The intramolecular dehydration to octadecene and cleavage of the intermediately formed ether, however, require strong BAS. L. Wang, D. Mei and J. A. Lercher acknowledge the partial support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE's Office of Biological and Environmental Research.
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive widespread adoption over scp.
Prototype of a Mobile Social Network for Education Using Dynamic Web Service
ERIC Educational Resources Information Center
Hoentsch, Sandra Costa Pinto; Carvalho, Felipe Oliveira; Santos, Luiz Marcus Monteiro de Almeida; Ribeiro, Admilson de Ribamar Lima
2012-01-01
This article presents the proposal of a social network site SocialNetLab that belongs to the Department of Computing-Federal University of Sergipe and which aims to locate and notify users of a nearby friend independently of the location technology available in the equipment through dynamic Web Service; to serve as a laboratory for research in…
User's guide to Version 2 of the Regeneration Establishment Model: Part of the Prognosis Model
Dennis E. Ferguson; Nicholas L. Crookston
1991-01-01
This publication describes how to use version 2 of the Regeneration Establishment Model, a computer-based simulator that is part of the Prognosis Model for Stand Development. Conifer regeneration is predicted following harvest and site preparation for forests in western Montana, central Idaho, and northern Idaho. The influence of western spruce budworm (Choristoneura...
Users' Perspectives on Tour-Guide Training Courses Using 3D Tourist Sites
ERIC Educational Resources Information Center
Chen, Yu-Fen; Mo, Huai-en
2014-01-01
Taiwan is currently attempting to develop itself into a twenty-first century tourist hub to take advantage of today's thriving global tourism economy. In the coming years, Taiwan anticipates an urgent demand for tour guides, and there is a clear need for training solutions that can serve a rapidly growing population. Computer-mediated virtual 3D…
VizieR Online Data Catalog: GalIMF version 1.0.0 (Yan+, 2017)
NASA Astrophysics Data System (ADS)
Yan, Z.; Jerabkova, T.; Kroupa, P.
2017-08-01
GalIMF stands for the Galaxy-wide Initial Mass Function. It is a Python 3 module that allows users to compute galaxy-wide initial stellar mass functions based on locally derived empirical constraints following the IGIMF theory. See the GalIMF homepage https://sites.google.com/view/galimf/home for more information. (1 data file).
ERIC Educational Resources Information Center
Bolch, Matt
2009-01-01
Whether for an entire district, a single campus, or one classroom, allowing authorized access to a computer network can be fraught with challenges. The login process should be fairly seamless to approved users, giving them speedy access to approved Web sites, databases, and other sources of information. It also should be tough on unauthorized…
BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction
NASA Astrophysics Data System (ADS)
Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert
2017-04-01
We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.
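The following toy Python sketch illustrates the Bayesian idea behind such transfer functions, not BUMPER's actual implementation: a grid posterior for one environmental variable ("temperature") given taxon counts, with invented Gaussian response curves.

    import numpy as np

    temps = np.linspace(0, 30, 301)                 # candidate temperatures
    optima, tol = np.array([8.0, 15.0, 22.0]), 3.0  # hypothetical taxon optima
    counts = np.array([2, 30, 10])                  # observed taxon counts

    # Gaussian response curves give each taxon's expected relative abundance.
    resp = np.exp(-0.5 * ((temps[:, None] - optima) / tol) ** 2)
    probs = resp / resp.sum(axis=1, keepdims=True)

    # Multinomial log-likelihood on the temperature grid, with a flat prior.
    loglik = (counts * np.log(probs)).sum(axis=1)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    print("posterior mean temperature:", (temps * post).sum())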
Paul, Sharad P.; Matulich, Justin; Charlton, Nick
2016-01-01
One of the problems in planning cutaneous surgery is that human skin is anisotropic, or directionally dependent. Indeed, skin tension varies between individuals and at different body sites. Many a surgeon has tried to design different devices to measure skin tension to help plan excisional surgery, or to understand wound healing. However, many of the devices have been beset with problems due to many confounding variables - differences in technical ability, material (sutures) used and variability between different users. We describe the development of a new skin tensiometer that overcomes many historical technical issues. A new skin tension measuring device is presented here. It was designed to be less user-dependent, more reliable and usable on different bodily sites. The design and computational optimizations are discussed. Our skin tensiometer has helped understand the differences between incisional and excisional skin lines. Langer, who pioneered the concept of skin tension lines, created incisional lines that differ from lines caused by forces that need to be overcome when large wounds are closed surgically (excisional tension). The use of this innovative device has led to understanding of skin biomechanics and best excisional skin tension (BEST) lines. PMID:27453542
Conjunctival impression cytology in computer users.
Kumar, S; Bansal, R; Khare, A; Malik, K P S; Malik, V K; Jain, K; Jain, C
2013-01-01
It is known that computer users develop features of dry eye. The aim was to study the cytological changes in the conjunctiva, using conjunctival impression cytology, in computer users and a control group. Fifteen eyes of computer users who had used computers for more than one year and ten eyes of an age- and sex-matched control group (those who had not used computers) were studied by conjunctival impression cytology. Conjunctival impression cytology (CIC) results in the control group were of stage 0 and stage I, while the computer user group showed CIC results between stage II and stage IV. Among the computer users, the majority (>90%) showed stage III and stage IV changes. We found that those who used computers daily for long hours developed more CIC changes than those who worked at the computer for a shorter daily duration. © NEPjOPH.
w4CSeq: software and web application to analyze 4C-seq data.
Cai, Mingyang; Gao, Fan; Lu, Wange; Wang, Kai
2016-11-01
Circularized Chromosome Conformation Capture followed by deep sequencing (4C-Seq) is a powerful technique to identify genome-wide partners interacting with a pre-specified genomic locus. Here, we present a computational and statistical approach to analyze 4C-Seq data generated from both enzyme digestion and sonication fragmentation-based methods. We implemented a command line software tool and a web interface called w4CSeq, which takes in the raw 4C sequencing data (FASTQ files) as input, performs automated statistical analysis and presents results in a user-friendly manner. Besides providing users with the list of candidate interacting sites/regions, w4CSeq generates figures showing the genome-wide distribution of interacting regions, and sketches the enrichment of key features such as TSSs, TTSs, CpG sites and DNA replication timing around 4C sites. Users can establish their own web server by downloading the source code at https://github.com/WGLab/w4CSeq. Additionally, a demo web server is available at http://w4cseq.wglab.org. CONTACT: kaiwang@usc.edu or wangelu@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
U.S. hydropower resource assessment for Idaho
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conner, A.M.; Francfort, J.E.
1998-08-01
The US Department of Energy is developing an estimate of the undeveloped hydropower potential in the US. The Hydropower Evaluation Software (HES) is a computer model that was developed by the Idaho National Engineering and Environmental Laboratory for this purpose. HES measures the undeveloped hydropower resources available in the US, using uniform criteria for measurement. The software was developed and tested using hydropower information and data provided by the Southwestern Power Administration. It is a menu-driven program that allows the personal computer user to assign environmental attributes to potential hydropower sites, calculate development suitability factors for each site based on the environmental attributes present, and generate reports based on these suitability factors. This report describes the resource assessment results for the State of Idaho.
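As a purely hypothetical sketch of the kind of computation described, the snippet below combines environmental attributes assigned to a site into a single development suitability factor; the attribute names and weights are invented, not HES's actual values.

    # Hypothetical penalty weight for each environmental attribute.
    ATTRIBUTE_WEIGHTS = {
        "wild_scenic_river": 0.9,
        "threatened_species": 0.75,
        "cultural_site": 0.5,
    }

    def suitability(attributes):
        # Each attribute present at a site reduces its suitability factor.
        factor = 1.0
        for name in attributes:
            factor *= 1.0 - ATTRIBUTE_WEIGHTS.get(name, 0.0)
        return factor

    print(suitability({"wild_scenic_river", "cultural_site"}))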
Computer and online health information literacy among Belgrade citizens aged 66-89 years.
Gazibara, Tatjana; Kurtagic, Ilma; Kisic-Tepavcevic, Darija; Nurkovic, Selmina; Kovacevic, Nikolina; Gazibara, Teodora; Pekmezovic, Tatjana
2016-06-01
Computer users over 65 years of age in Serbia are rare. The purpose of this study was to (i) describe the main demographic characteristics of computer users older than 65; (ii) evaluate their online health information literacy and (iii) assess factors associated with computer use in this population. Persons above 65 years of age were recruited at the Community Health Center 'Vračar' in Belgrade from November 2012 to January 2013. Data were collected after medical checkups using a questionnaire. Of 480 persons who were invited, 354 (73.7%) agreed to participate, while 346 filled in the questionnaire (72.1%). A total of 70 (20.2%) older persons were computer users (23.4% of males vs. 17.7% of females). Of those, 23.7% explored health-related web sites. The majority of older persons who do not use computers reported that they do not have a reason to use a computer (76.5%), while every third senior (30.4%) did not own a computer. Predictors of computer use were being younger [odds ratio (OR) = 2.14, 95% confidence interval (CI) 1.30-4.04; p = 0.019], having fewer members of household (OR = 2.97, 95% CI 1.45-6.08; p = 0.003), being more educated (OR = 3.53, 95% CI 1.88-6.63; p = 0.001), having higher income (OR = 2.31, 95% CI 1.17-4.58; p = 0.016) as well as fewer comorbidities (OR = 0.42, 95% CI 0.23-0.79; p = 0.007). Being male was an independent predictor of online health information use at the level of marginal significance (OR = 4.43, 95% CI 1.93-21.00; p = 0.061). The frequency of computer and Internet use among older adults in Belgrade is similar to that in other populations. Patterns of Internet use as well as non-use demonstrate particular socio-cultural characteristics. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
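For readers unfamiliar with the statistics reported above, the sketch below shows how an odds ratio and its 95% confidence interval can be computed from a 2x2 table for a single binary predictor; the counts are invented and are not the study's data.

    import math

    # 2x2 table (invented counts): rows = computer users / non-users,
    # columns = predictor present / absent.
    a, b = 40, 30
    c, d = 30, 246

    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    print(f"OR={or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")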
Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal
2013-01-01
We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
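As an illustrative sketch of the kind of compartment model COMKAT fits (not COMKAT's own code, which is MATLAB-based), the following Python snippet integrates a one-tissue compartment model dCt/dt = K1*Cp(t) - k2*Ct(t) with invented rate constants and input function:

    import numpy as np
    from scipy.integrate import odeint

    K1, k2 = 0.1, 0.05               # hypothetical rate constants (1/min)
    cp = lambda t: np.exp(-0.1 * t)  # hypothetical plasma input function

    def dct(ct, t):
        # One-tissue compartment model: influx from plasma minus efflux.
        return K1 * cp(t) - k2 * ct

    t = np.linspace(0, 60, 121)
    ct = odeint(dct, 0.0, t).ravel()
    print("tissue concentration at 60 min:", ct[-1])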
Real Time Metrics and Analysis of Integrated Arrival, Departure, and Surface Operations
NASA Technical Reports Server (NTRS)
Sharma, Shivanjli; Fergus, John
2017-01-01
A real time dashboard was developed to inform users and present notifications and integrated information regarding airport surface operations. The dashboard is a supplement to capabilities and tools that incorporate arrival, departure, and surface air-traffic operations concepts in a NextGen environment. As trajectory-based departure scheduling and collaborative decision making tools are introduced to reduce delays and uncertainties in taxi and climb operations across the National Airspace System, users across a number of roles benefit from a real time system that enables common situational awareness. In addition to shared situational awareness, the dashboard offers the ability to compute real time metrics and analysis to inform users about capacity, predictability, and efficiency of the system as a whole. This paper describes the architecture of the real time dashboard as well as an initial set of metrics computed on operational data. The potential impact of the real time dashboard is studied at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport. Analysis and metrics computed in real time illustrate the opportunity to provide common situational awareness and inform users of metrics across delay, throughput, taxi time, and airport capacity. In addition, common awareness of delays and the impact of takeoff and departure restrictions stemming from traffic flow management initiatives are explored. The potential of the real time tool to inform the predictability and efficiency of using a trajectory-based departure scheduling system is also discussed.
Intelligent user interface concept for space station
NASA Technical Reports Server (NTRS)
Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen
1986-01-01
The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.
ERIC Educational Resources Information Center
Paisley, William; Butler, Matilda
This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…
Offspring Generation Method for interactive Genetic Algorithm considering Multimodal Preference
NASA Astrophysics Data System (ADS)
Ito, Fuyuko; Hiroyasu, Tomoyuki; Miki, Mitsunori; Yokouchi, Hisatake
In interactive genetic algorithms (iGAs), computer simulations prepare design candidates that are then evaluated by the user; iGAs can therefore predict a user's preferences. Conventional iGA problems involve a search for a single optimum solution, and iGAs were developed to find this single optimum. Our target problems, on the other hand, have several peaks in a function, with small differences among these peaks. For such problems, it is better to show all the peaks to the user. Product recommendation on web shopping sites is one example of such a problem: several types of preference trend should be prepared for users of shopping sites. Exploitation and exploration are important mechanisms in GA search, and to perform effective exploitation, the offspring generation method (crossover) is very important. Here, we introduce a new offspring generation method for iGAs in multimodal problems. In the proposed method, individuals are clustered into subgroups and offspring are generated within each group, as sketched below. The proposed method was applied to an experimental iGA system to examine its effectiveness. In the experimental iGA system, users can decide on preferable t-shirts to buy. The results of the subjective experiment confirmed that the proposed method enables offspring generation with consideration of multimodal preferences, and the proposed mechanism was also shown not to adversely affect the performance of preference prediction.
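A minimal Python sketch of the proposed offspring-generation idea, under the assumption of a real-valued encoding: the population is clustered into preference subgroups (here with k-means) and one-point crossover is applied within each group, so offspring track several preference peaks at once. The exact clustering and operators of the paper are not reproduced.

    import random
    from sklearn.cluster import KMeans

    # Toy population of 20 real-coded individuals with 5 design variables each.
    population = [[random.random() for _ in range(5)] for _ in range(20)]
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(population)

    offspring = []
    for cluster in range(3):
        group = [ind for ind, lab in zip(population, labels) if lab == cluster]
        if len(group) < 2:
            continue  # a subgroup needs at least two parents
        for _ in range(len(group)):
            p1, p2 = random.sample(group, 2)
            cut = random.randrange(1, len(p1))
            offspring.append(p1[:cut] + p2[cut:])  # one-point crossover
    print(len(offspring), "offspring generated across subgroups")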
Web services in the U.S. geological survey streamstats web application
Guthrie, J.D.; Dartiguenave, C.; Ries, Kernell G.
2009-01-01
StreamStats is a U.S. Geological Survey Web-based GIS application developed as a tool for water-resources planning and management, engineering design, and other applications. StreamStats' primary functionality allows users to obtain drainage-basin boundaries, basin characteristics, and streamflow statistics for gaged and ungaged sites. Recently, Web services have been developed that give remote users and applications access to comprehensive GIS tools that are available in StreamStats, including delineating drainage-basin boundaries, computing basin characteristics, estimating streamflow statistics for user-selected locations, and determining point features that coincide with a National Hydrography Dataset (NHD) reach address. For the state of Kentucky, a web service also has been developed that provides users the ability to estimate daily time series of drainage-basin average values of daily precipitation and temperature. The use of web services allows the user to take full advantage of the datasets and processes behind the StreamStats application without having to develop and maintain them. © 2009 IEEE.
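As a hypothetical illustration of calling such a web service from a client (the endpoint URL and parameter names below are invented for the sketch and are not the actual USGS API):

    import requests

    # Placeholder endpoint and parameters for a basin-delineation request.
    resp = requests.get(
        "https://example.usgs.gov/streamstats/delineate",
        params={"x": -84.5, "y": 38.0, "crs": 4326, "state": "KY"},
        timeout=30,
    )
    resp.raise_for_status()
    basin = resp.json()
    print(basin.get("areaSqMi"))  # hypothetical response field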
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2008-10-08
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2008, for the fixed sites. The AMF has been deployed to China, but the data have not yet been released. The fourth quarter comprises a total of 2,208 hours. The average exceeded our goal this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. HFE represents the AMF statistics for the Shouxian, China, deployment in 2008. FKB represents the AMF statistics for the Haselbach, Germany, past deployment in 2007. NIM represents the AMF statistics for the Niamey, Niger, Africa, past deployment in 2006. PYE represents just the AMF Archive statistics for the Point Reyes, California, past deployment in 2005. In addition, users who do not want to wait for data to be provided through the ACRF Archive can request a research account on the local site data system. The seven computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; and the DMF at PNNL. In addition, the ACRF serves as a data repository for a long-term Arctic atmospheric observatory in Eureka, Canada (80 degrees 05 minutes N, 86 degrees 43 minutes W) as part of the multiagency Study of Environmental Arctic Change (SEARCH) Program. NOAA began providing instruments for the site in 2005, and currently cloud radar data are available. The intent of the site is to monitor the important components of the Arctic atmosphere, including clouds, aerosols, atmospheric radiation, and local-scale atmospheric dynamics. Because of the similarity of ACRF NSA data streams and the important synergy that can be formed between a network of Arctic atmospheric observations, much of the SEARCH observatory data are archived in the ARM archive. Instruments will be added to the site over time. For more information, please visit http://www.db.arm.gov/data. The designation for the archived Eureka data is YEU and is now included in the ACRF user metrics. This quarterly report provides the cumulative numbers of visitors and user accounts by site for the period October 1, 2007 - September 30, 2008. Table 2 shows the summary of cumulative users for the period October 1, 2007 - September 30, 2008. For the fourth quarter of FY 2008, the overall number of users is down substantially (about 30%) from last quarter.
Most of this decrease resulted from a reduction in the ACRF Infrastructure users (e.g., site visits, research accounts, on-site device accounts, etc.) associated with the AMF China deployment. While users had easy access to the previous AMF deployment in Germany that resulted in all-time high user statistics, physical and remote access to on-site accounts are extremely limited for the AMF deployment in China. Furthermore, AMF data have not yet been released from China to the Data Management Facility for processing, which affects Archive user statistics. However, Archive users are only down about 10% from last quarter. Another reason for the apparent reduction in Archive users is that data from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), a major field campaign conducted on the North Slope of Alaska, are not yet available to users. For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period July 1 - September 30, 2008. There were no incidents this reporting period.
Sanders, G D; Nease, R F; Owens, D K
2000-01-01
Local tailoring of clinical practice guidelines (CPGs) requires experts in medicine and evidence synthesis unavailable in many practice settings. The authors' computer-based system enables developers and users to create, disseminate, and tailor CPGs, using normative decision models (DMs). ALCHEMIST, a web-based system, analyzes a DM, creates a CPG in the form of an annotated algorithm, and displays for the guideline user the optimal strategy. ALCHEMIST's interface enables remote users to tailor the guideline by changing underlying input variables and observing the new annotated algorithm that is developed automatically. In a pilot evaluation of the system, a DM was used to evaluate strategies for staging non-small-cell lung cancer. Subjects (n = 15) compared the automatically created CPG with published guidelines for this staging and critiqued both using a previously developed instrument to rate the CPGs' usability, accountability, and accuracy on a scale of 0 (worst) to 2 (best), with higher scores reflecting higher quality. The mean overall score for the ALCHEMIST CPG was 1.502, compared with the published-CPG score of 0.987 (p = 0.002). The ALCHEMIST CPG scores for usability, accountability, and accuracy were 1.683, 1.393, and 1.430, respectively; the published CPG scores were 1.192, 0.941, and 0.830 (each comparison p < 0.05). On a scale of 1 (worst) to 5 (best), users' mean ratings of ALCHEMIST's ease of use, usefulness of content, and presentation format were 4.76, 3.98, and 4.64, respectively. The results demonstrate the feasibility of a web-based system that automatically analyzes a DM and creates a CPG as an annotated algorithm, enabling remote users to develop site-specific CPGs. In the pilot evaluation, the ALCHEMIST guidelines met established criteria for quality and compared favorably with national CPGs. The high usability and usefulness ratings suggest that such systems can be a good tool for guideline development.
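The decision-model step that ALCHEMIST automates can be illustrated with a toy expected-utility calculation in Python; the strategies, probabilities, and utilities below are invented, not those of the lung-cancer staging model.

    # Each strategy lists outcome probabilities and the utility of each outcome.
    strategies = {
        "CT then surgery": {"p": [0.7, 0.3], "u": [0.9, 0.4]},
        "surgery directly": {"p": [0.55, 0.45], "u": [0.95, 0.35]},
    }

    def expected_utility(s):
        return sum(p * u for p, u in zip(s["p"], s["u"]))

    # The guideline recommends the strategy with the highest expected utility.
    best = max(strategies, key=lambda k: expected_utility(strategies[k]))
    print(best, round(expected_utility(strategies[best]), 3))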
Coalescent: an open-source and scalable framework for exact calculations in coalescent theory
2012-01-01
Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics, nor is there a scalable GUI-based user application. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
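A sketch of the pluggable-computation idea, rendered in Python for brevity (the class and function names here are invented, not the framework's actual API): computations share one recursion over ancestral configurations and are attached as interchangeable visitors.

    class Computation:
        """Interface for a pluggable computation over ancestral configurations."""
        def visit(self, configuration): ...
        def result(self): ...

    class CountConfigurations(Computation):
        def __init__(self):
            self.count = 0
        def visit(self, configuration):
            self.count += 1
        def result(self):
            return self.count

    def recurse(configuration, computations, ancestors_of):
        # Every attached computation sees every configuration the recursion visits.
        for c in computations:
            c.visit(configuration)
        for ancestor in ancestors_of(configuration):
            recurse(ancestor, computations, ancestors_of)

    counter = CountConfigurations()
    recurse("AAB", [counter], lambda cfg: [])  # toy data with no ancestors
    print(counter.result())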
ABSENTEE COMPUTATIONS IN A MULTIPLE-ACCESS COMPUTER SYSTEM.
Some computations do not require user interaction, and the user may therefore want to run these computations 'absentee' (i.e., with the user not present). A mechanism is presented which provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent. Some novel features of the system's design are: a user can switch computations from interactive to absentee (and vice versa), and the system can ...
Ye, Nong; Li, Xiangyang; Farley, Toni
2003-01-15
Hand signs are considered one of the important ways to enter information into computers for certain tasks. Computers receive sensor data of hand signs for recognition. When using hand signs as computer inputs, we need to (1) train computer users in the sign language so that their hand signs can be easily recognized by computers, and (2) design the computer interface to avoid the use of confusing signs, improving user input performance and user satisfaction. For user training and computer interface design, it is important to know which signs can be easily recognized by computers and which signs computers cannot distinguish. This paper presents a data mining technique to discover distinct patterns of hand signs from sensor data. Based on these patterns, we derive a group of signs that are indistinguishable by computers. Such information can in turn assist in user training and computer interface design.
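A hedged sketch of how indistinguishable sign groups might be derived (the paper's own mining technique is not reproduced here): train a classifier on sensor vectors and read confusable sign pairs off the confusion matrix; the data below are synthetic stand-ins for real glove-sensor readings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))    # synthetic sensor features
    y = rng.integers(0, 5, size=300)  # five hypothetical signs

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    pred = KNeighborsClassifier().fit(Xtr, ytr).predict(Xte)

    # Off-diagonal mass marks sign pairs that are hard to distinguish.
    print(confusion_matrix(yte, pred))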
Social Networking Adapted for Distributed Scientific Collaboration
NASA Technical Reports Server (NTRS)
Karimabadi, Homa
2012-01-01
Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and MySpace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) sharing of any file type, any size, from anywhere; b) creation of projects and groups for controlled sharing; c) a module for sharing files on HPC (High Performance Computing) sites; d) universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) enterprise-level data and messaging encryption; and g) an easy-to-use, intuitive workflow.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
75 FR 75170 - APHIS User Fee Web Site
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-02
...] APHIS User Fee Web Site AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice... recover the costs of providing certain services. This notice announces the availability of a Web site that contains information about the Agency's user fees. ADDRESSES: The Agency's user fee Web site is located at...
Usability and Accessibility of eBay by Screen Reader
NASA Astrophysics Data System (ADS)
Buzzi, Maria Claudia; Buzzi, Marina; Leporini, Barbara; Akhter, Fahim
The evolution of Information and Communication Technology and the rapid growth of the Internet have fuelled a great diffusion of eCommerce websites. Usually these sites have complex layouts crowded with active elements, and thus are difficult to navigate via screen reader. Interactive environments should be properly designed and delivered to everyone, including the blind, who usually use screen readers to interact with their computers. In this paper we investigate the interaction of blind users with eBay, a popular eCommerce website, and discuss how using the W3C Accessible Rich Internet Applications (WAI-ARIA) suite could improve the user experience when navigating via screen reader.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faculjak, D.A.
1988-03-01
Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.
Prototyping the graphical user interface for the operator of the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Sadeh, I.; Oya, I.; Schwarz, J.; Pietriga, E.
2016-07-01
The Cherenkov Telescope Array (CTA) is a planned gamma-ray observatory. CTA will incorporate about 100 imaging atmospheric Cherenkov telescopes (IACTs) at a Southern site, and about 20 in the North. Previous IACT experiments have used up to five telescopes; consequently, the design of a graphical user interface (GUI) for the operator of CTA involves new challenges. We present a GUI prototype, the concept for which is being developed in collaboration with experts from the field of Human-Computer Interaction (HCI). The prototype is based on Web technology; it incorporates a Python web server, WebSockets and graphics generated with the d3.js Javascript library.
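A minimal sketch of the server side of such a pipeline, assuming the third-party Python "websockets" package (handler signature as in recent releases) and invented status data; the d3.js client that would render the updates is omitted.

    import asyncio
    import json
    import random

    import websockets

    async def status_feed(websocket):
        # Push one invented telescope-status update per second to the browser GUI.
        while True:
            update = {"telescope": random.randrange(100), "state": "tracking"}
            await websocket.send(json.dumps(update))
            await asyncio.sleep(1)

    async def main():
        async with websockets.serve(status_feed, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())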
1986-10-01
Five techniques for developing user self-sufficiency were discussed in-depth: (1) provide training at different proficiency levels, (2) provide ... a data base, accessible through a user-friendly interface, to correlate and deliver crime, traffic, vehicle, and personnel information to fixed and ...
'Surfing the Silk Road': a study of users' experiences.
Van Hout, Marie Claire; Bingham, Tim
2013-11-01
The online drug marketplace called 'Silk Road' has operated anonymously on the 'Deep Web' since 2011. It is accessible through encryption software (Tor) and is supported by online transactions using a peer-to-peer, anonymous and untraceable cryptocurrency (Bitcoin). The study aimed to describe user motives and realities of accessing, navigating and purchasing on the 'Silk Road' marketplace. Systematic online observations, monitoring of discussion threads on the site during four months of fieldwork, and analysis of anonymous online interviews (n=20) with a convenience sample of adult 'Silk Road' users were conducted. The majority of participants were male, in professional employment or in tertiary education. Drug trajectories ranged from 18 months to 25 years, with favourite drugs including MDMA, 2C-B, mephedrone, nitrous oxide, ketamine, cannabis and cocaine. Few reported prior experience of online drug sourcing. Reasons for utilizing 'Silk Road' included curiosity, concerns for street drug quality and personal safety, variety of products, anonymous transactions, and ease of product delivery. Vendor selection appeared to be based on trust, speed of transaction, stealth modes and quality of product. Forums on the site provided user advice, trip reports, and product and transaction reviews. Some users reported solitary drug use for psychonautic and introspective purposes. A minority reported customs seizures, and in general a displacement away from traditional drug sourcing (street and closed markets) was described. Several reported intentions to commence vending on the site. The study provides an insight into 'Silk Road' purchasing motives and processes, the interplay between traditional and 'Silk Road' drug markets, and the 'Silk Road' online community and its communication networks. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian
2011-06-01
Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds of the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.
Framework for experimenting with QoS for multimedia services
NASA Astrophysics Data System (ADS)
Chen, Deming; Colwell, Regis; Gelman, Herschel; Chrysanthis, Panos K.; Mosse, Daniel
1996-03-01
It has been recognized that effective support for multimedia applications must provide Quality of Service (QoS) guarantees. Current methods propose to provide such QoS guarantees through coordinated network resource reservations. In our approach, we extend this idea by providing system-wide QoS guarantees that consider the data manipulation and transformations needed in the intermediate and end sites of the network. Given a user's QoS requirements, multisegment virtual channels are established with the necessary communication and computation resources reserved for the timely, synchronized, and reliable delivery of the different datatypes. Such data originate in several distributed data repositories, are transformed at intermediate service stations into suitable formats for transportation and presentation, and are delivered to a viewing unit. In this paper, we first review NETWORLD, an architecture that provides such QoS guarantees and an interface for the specification and negotiation of user-level QoS requirements. Our user interface supports both expert and non-expert modes. We then describe how to map user-level QoS requirements into low-level system parameters, leading to a contract between the application and the network. The mapping considers various characteristics of the architectures (such as the hardware and software available at each source, destination, or intermediate site) as well as cost constraints.
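A toy Python sketch of the mapping step, under invented names and values (three channel segments, three video-quality tiers); the real NETWORLD mapping is considerably richer.

    from dataclasses import dataclass

    @dataclass
    class UserQoS:
        video_quality: str  # "low" | "medium" | "high"
        max_delay_ms: int

    def to_system_params(q: UserQoS):
        # Hypothetical tier-to-bandwidth table and a three-segment channel.
        bandwidth_kbps = {"low": 384, "medium": 1500, "high": 6000}[q.video_quality]
        return {
            "bandwidth_kbps": bandwidth_kbps,
            "per_segment_deadline_ms": q.max_delay_ms // 3,
            "buffer_kb": bandwidth_kbps * q.max_delay_ms // 8000,
        }

    print(to_system_params(UserQoS("medium", 300)))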
Computer Directed Training System (CDTS), User’s Manual
1983-07-01
lessons, together with an estimate of the time required for completion. a. BSC010. This lesson in BASIC (Beginners All Purpose Symbolic Instruction Code) ... Figure A2-1. Training Systems Manager and Training Monitors Responsibility Flowchart ... training at the site. Therefore, the TSM must be knowledgeable in the various tasks required. Figure A2-1 illustrates the position in the flowchart. These ...
ERIC Educational Resources Information Center
Huang, T. K.
2018-01-01
The study makes use of the photo-hosting site, namely Flickr, for students to upload screenshots to demonstrate computer software problems and troubleshooting software. By creating non-text stickers and text-based annotations above the screenshots, students are able to help one another to diagnose and solve problems with greater certainty. In…
Maintaining Traceability in an Evolving Distributed Computing Environment
NASA Astrophysics Data System (ADS)
Collier, I.; Wartel, R.
2015-12-01
The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their logging infrastructures. VOs indeed need to fulfil a new operational role and become fully active participants in the incident response process. We present an analysis of the changing requirements to maintain traceability for virtualised and cloud based workflows, with particular reference to the work of the WLCG Traceability Working Group.
Usability Evaluation of Public Web Mapping Sites
NASA Astrophysics Data System (ADS)
Wang, C.
2014-04-01
Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites have become familiar to the general public: people use them for many purposes, as ever more maps and related map services become freely available to end users. This growing user base has, in turn, motivated more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites by examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively, and provide guidelines for future design based on the test results. First, the development of usability studies is reviewed, and several evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD), and Human-Computer Interaction (HCI) are briefly introduced. The method and procedure of the usability test are then presented in detail. Four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people of differing GIS skill (novice users or experts), gender, age, and nationality participated, completing several test tasks in different teams. The test comprised three parts: a pretest background questionnaire, a set of test tasks for quantitative statistics and progress analysis, and a posttest questionnaire. The pretest and posttest questionnaires gathered qualitative, verbal explanations of the participants' actions, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. The results, mainly from the test tasks, were then analyzed: the success rate for each site was calculated, compared, and displayed in diagrams, and the questionnaire answers were classified and organized. Based on this analysis, the paper discusses layout, map visualization, map tools, search logic, and related issues, and closes with guidelines and suggestions for the design of public web mapping sites, along with the limitations of the research.
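For the quantitative part, the per-site success rate is simply the fraction of completed tasks. A minimal sketch with invented outcomes:

```python
# Sketch: per-site task success rates, as in the comparison described above.
# The outcome data are invented for illustration.
results = {  # site -> list of task outcomes (True = task completed successfully)
    "Google Maps": [True, True, False, True, True],
    "Bing Maps":   [True, False, False, True, True],
    "MapQuest":    [True, True, True, False, False],
    "Yahoo Maps":  [False, True, True, True, False],
}

for site, outcomes in results.items():
    success_rate = 100.0 * sum(outcomes) / len(outcomes)
    print(f"{site:12s} {success_rate:5.1f}% success")
```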
DAMBE7: New and Improved Tools for Data Analysis in Molecular Biology and Evolution.
Xia, Xuhua
2018-06-01
DAMBE is a comprehensive software package for genomic and phylogenetic data analysis on Windows, Linux, and Macintosh computers. New functions include imputing missing distances and phylogeny simultaneously (paving the way to building large phage and transposon trees), new bootstrapping/jackknifing methods for PhyPA (phylogenetics from pairwise alignments), and an improved function for fast and accurate estimation of the shape parameter of the gamma distribution used to fit rate heterogeneity over sites. The previous method corrected multiple hits for each site independently; DAMBE's new method uses all sites simultaneously for the correction. DAMBE, featuring a user-friendly graphical interface, is freely available from http://dambe.bio.uottawa.ca (last accessed April 17, 2018).
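For reference, the standard gamma model of among-site rate heterogeneity, whose shape parameter α DAMBE estimates, constrains the mean rate to 1 so that a single parameter controls the spread; small α means strong heterogeneity:

```latex
% Gamma-distributed rates r across sites, mean fixed at 1 (rate parameter = \alpha):
p(r \mid \alpha) \;=\; \frac{\alpha^{\alpha}}{\Gamma(\alpha)}\, r^{\alpha-1} e^{-\alpha r},
\qquad \mathrm{E}[r] = 1, \quad \mathrm{Var}[r] = 1/\alpha .
```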
Web-Based Family Life Education: Spotlight on User Experience
ERIC Educational Resources Information Center
Doty, Jennifer; Doty, Matthew; Dworkin, Jodi
2011-01-01
Family Life Education (FLE) websites can benefit from the field of user experience, which makes technology easy to use. A heuristic evaluation of five FLE sites was performed using Nielsen's heuristics, guidelines for making sites user friendly. Greater site complexity resulted in more potential user problems. Sites most frequently had problems…
Automated Euler and Navier-Stokes Database Generation for a Glide-Back Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Mike J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejnil, Edward
2004-01-01
The past two decades have seen a sustained increase in the use of high-fidelity Computational Fluid Dynamics (CFD) in basic research, aircraft design, and the analysis of post-design issues. As the fidelity of a CFD method increases, the number of cases that can be readily and affordably computed greatly diminishes. However, computer speeds now exceed 2 GHz, hundreds of processors are available at affordable cost, and advances in parallel CFD algorithms scale more readily to large numbers of processors. All of these factors make it feasible to compute thousands of high-fidelity cases; what remains is the overwhelming task of monitoring the solution process. This paper presents an approach to automating the CFD solution process. A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a second-generation glide-back booster in one week. The solution process exploits a common job-submission grid environment, the NASA Information Power Grid (IPG), using 13 computers located at 4 different geographical sites. Process automation and web-based access to a MySQL database greatly reduce the user workload, removing much of the tedium and the tendency for user input errors. In the AeroDB framework, the user submits or deletes jobs, monitors AeroDB's progress, and retrieves data and plots via a web portal. Once a job is in the database, a job launcher uses an IPG resource broker to decide which computers are best suited to run the job; job/code requirements, the number of free CPUs on a remote system, and queue lengths are some of the parameters the broker takes into account. The Globus software provides secure services for user authentication, remote shell execution, and secure file transfers over an open network, and AeroDB automatically decides when a job is completed. Currently, the Cart3D unstructured flow solver is used for the Euler equations, and the Overflow structured overset flow solver for the Navier-Stokes equations; other codes can readily be included in the AeroDB framework.
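A hedged sketch of the brokering step: among hosts that can fit a job, prefer the shortest queue, then the most free CPUs. The host names, fields, and scoring rule are invented for illustration; the IPG broker weighs more parameters than this.

```python
# Sketch: pick the machine best suited to a job from free CPUs and queue length.
hosts = [
    {"name": "hostA.nasa.example", "free_cpus": 64,  "queue_len": 12},
    {"name": "hostB.nasa.example", "free_cpus": 16,  "queue_len": 0},
    {"name": "hostC.nasa.example", "free_cpus": 128, "queue_len": 40},
]

def pick_host(job_cpus: int) -> str:
    """Among hosts that can fit the job, prefer short queues, then free CPUs."""
    eligible = [h for h in hosts if h["free_cpus"] >= job_cpus]
    if not eligible:
        raise RuntimeError("no host can currently fit this job")
    best = min(eligible, key=lambda h: (h["queue_len"], -h["free_cpus"]))
    return best["name"]

print(pick_host(job_cpus=32))  # -> hostA.nasa.example
```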
DMSP SSJ4 Data Restoration, Classification, and On-Line Data Access
NASA Technical Reports Server (NTRS)
Wing, Simon; Bredekamp, Joseph H. (Technical Monitor)
2000-01-01
This report summarizes progress on several tasks. Compressing and cleaning the raw data files for permanent storage: we identified various error conditions and types and developed algorithms to remove these errors and noise, including the more complicated noise in the newer data sets (status: 100% complete). Internet access to the compacted raw data: the raw data, and the software to read and plot them, are now available via our web site, http://www.jhuapl.edu/Aurora/index.html; users can download, read, plot, or manipulate the data as they wish on their own computers, and can also access the cleaned data sets. Internet access to the color spectrograms: this task has been completed, and the spectrograms are accessible from the same web site. Improving the particle-precipitation region classification: the algorithm has been developed and implemented, the accuracies have improved, and the web site now routinely distributes the results of applying the new algorithm to the cleaned data set. Marking the classification regions on the spectrograms: the software to do this has been completed and is also available from our web site.
Distributed Computing for the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Chudoba, J.
2015-12-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high-energy cosmic ray measurements. Comparing theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization (VO) auger; for many years, VO auger has been among the top ten EGI users by total computing time. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production, and the new system can also use available cloud resources. The DIRAC File Catalog replaced the LFC for new files, which are organized into datasets defined via metadata, and CVMFS has been used for software distribution since 2014. In the presentation we compare the old and new production systems and report on the experience of migrating to the new system.
Intrusion Prevention and Detection in Grid Computing - The ALICE Case
NASA Astrophysics Data System (ADS)
Gomez, Andres; Lara, Camilo; Kebschull, Udo
2015-12-01
Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is the one used by the ALICE experiment at CERN, the European Organization for Nuclear Research, where physicists can submit jobs to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges: they are attractive targets for attackers seeking large computational resources, and since users can execute arbitrary code on the worker nodes of Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required, and currently no integrated solution exists for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also monitors process behaviour to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, using a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
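A minimal sketch of the machine-learning idea, assuming isolation-forest anomaly detection over per-process behaviour features; the features, data, and model choice are illustrative, not the ALICE prototype:

```python
# Sketch: flag anomalous job-payload behaviour with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: invented per-process features,
# e.g. [syscalls per second, outbound connections, child processes].
normal_jobs = np.array([[120, 2, 1], [100, 1, 1], [140, 2, 2], [110, 1, 1]])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_jobs)

suspect = np.array([[900, 40, 25]])    # e.g. a payload spawning a network scanner
print(model.predict(suspect))           # -1 = anomaly, 1 = normal
```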
Computer-based physician order entry: the state of the art.
Sittig, D F; Stead, W W
1994-01-01
Direct computer-based physician order entry has been the subject of debate for over 20 years. Many sites have implemented systems successfully. Others have failed outright or flirted with disaster, incurring substantial delays, cost overruns, and threatened work actions. The rationale for physician order entry includes process improvement, support of cost-conscious decision making, clinical decision support, and optimization of physicians' time. Barriers to physician order entry result from the changes required in practice patterns, roles within the care team, teaching patterns, and institutional policies. Key ingredients for successful implementation include: the system must be fast and easy to use, the user interface must behave consistently in all situations, the institution must have broad and committed involvement and direction by clinicians prior to implementation, the top leadership of the organization must be committed to the project, and a group of problem solvers and users must meet regularly to work out procedural issues. This article reviews the peer-reviewed scientific literature to present the current state of the art of computer-based physician order entry. PMID:7719793
EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use
NASA Technical Reports Server (NTRS)
Petersen, Ruth A.
2001-01-01
EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Gutsche, O.
The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 workflows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we report on the tests performed and present their post-mortem analysis.
Satellite freeze forecast system: Executive summary
NASA Technical Reports Server (NTRS)
Martsolf, J. D. (Principal Investigator)
1983-01-01
A satellite-based temperature monitoring and prediction system, consisting of a computer-controlled acquisition, processing, and display system and the ten automated weather stations called by that computer, was developed and transferred to the National Weather Service. This Satellite Freeze Forecast System (SFFS) acquires satellite data from either of two sources and surface data from 10 sites, displays the observed data as color-coded thermal maps and as tables of automated weather station temperatures, computes predicted thermal maps on request and displays them either automatically or manually, archives the acquired data, and makes comparisons with historical data. Except for the last function, SFFS handles these tasks in a highly automated fashion if the user so directs. The predicted thermal maps are the result of two models: a physical energy-budget model of the soil-atmosphere interface, and a statistical relationship between the sites at which the physical model predicts temperatures and each of the pixels of the satellite thermal map.
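One plausible reading of the statistical step, in illustrative notation: for each pixel p of the thermal map, relate its temperature to the physical model's prediction at an associated site s(p) through a per-pixel linear fit to historical data. The form and the site assignment are assumptions, not the paper's stated equations.

```latex
% Per-pixel statistical relation; a_p and b_p are fitted from historical
% pairs of site predictions and observed pixel temperatures:
\hat{T}_{p} \;=\; a_{p} \;+\; b_{p}\,\hat{T}_{s(p)} .
```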
NASA Astrophysics Data System (ADS)
Lawden, M. D.; Mellor, G. R.; Smalley, B.
This is a brief introduction to the Unix operating system as it is used in the Starlink Project. It is aimed at users who are new to Unix and to Starlink. Its purpose is to get you started quickly, so it is simple rather than comprehensive. It should be read in conjunction with any local guides provided at your local Starlink Site -- these give details of the local facilities available and how to use them. This is important because Starlink Sites differ in their computer hardware and in their versions of Unix. A "Quick Reference Card" is available which summarises the contents of this note.
NASA Astrophysics Data System (ADS)
Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob
2007-07-01
The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, the addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared and common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.
Computer generated maps from digital satellite data - A case study in Florida
NASA Technical Reports Server (NTRS)
Arvanitis, L. G.; Reich, R. M.; Newburne, R.
1981-01-01
Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.
75 FR 75962 - Proposed Information Collection; Comment Request; Commerce.Gov Web Site User Survey
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
...; Commerce.Gov Web Site User Survey AGENCY: Office of the Secretary, Office of Public Affairs. ACTION: Notice... serve users of Commerce.gov and the Department of Commerce bureaus' Web sites, the Offices of Public Affairs will collect information from users about their experience on the Web sites. A random number of...
Visualization and Quality Control Web Tools for CERES Products
NASA Astrophysics Data System (ADS)
Mitrescu, C.; Doelling, D. R.; Rutan, D. A.
2016-12-01
The CERES project continues to provide the scientific community with a wide variety of satellite-derived data products, such as observed top-of-atmosphere (TOA) broadband shortwave and longwave fluxes, computed TOA and surface fluxes, and cloud, aerosol, and other atmospheric parameters. The products encompass a wide range of temporal and spatial resolutions suited to specific applications. Now in its 16th year, CERES is mostly used by climate modeling communities that focus on global mean energetics, meridional heat transport, and climate trend studies. To serve all our users, we developed a web-based Ordering and Visualization Tool (OVT). Using open-source software such as Eclipse, Java, JavaScript, OpenLayers, Flot, Google Maps, and Python, the OVT team developed a series of specialized functions for the CERES Data Quality Control (QC) process, including 1- and 2-D histograms, anomaly computation, deseasonalization, temporal and spatial averaging, and side-by-side parameter comparison; these made QC far easier and faster and, more importantly, far more portable. We are now integrating ground-site observed surface fluxes to further help the CERES project QC its computed surface fluxes. These features will also give users the opportunity to perform their own comparisons between the CERES computed surface fluxes and observed ground-site fluxes. An overview of the basic OVT functions, as well as future steps in expanding its capabilities, will be presented at the meeting.
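A minimal sketch, with invented data, of two of the QC functions named above, deseasonalization and anomaly computation for a monthly flux series (pandas): the monthly climatology is the mean per calendar month, and subtracting it leaves the anomaly.

```python
# Sketch: deseasonalize a monthly series by removing the per-calendar-month mean.
import numpy as np
import pandas as pd

idx = pd.date_range("2000-01-01", periods=192, freq="MS")   # 16 years, monthly
flux = pd.Series(240 + 10 * np.sin(2 * np.pi * idx.month / 12)
                 + np.random.default_rng(0).normal(0, 1, len(idx)), index=idx)

climatology = flux.groupby(flux.index.month).mean()          # per-month mean
anomaly = flux.groupby(flux.index.month).transform(lambda x: x - x.mean())
print(climatology.round(1))
print(anomaly.head())
```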
Telescience Support Center Data System Software
NASA Technical Reports Server (NTRS)
Rahman, Hasan
2010-01-01
The Telescience Support Center (TSC) team has developed a database-driven, increment-specific Data Requirement Document (DRD) generation tool that automates much of the work required for generating and formatting the DRD. It creates a database to load the required changes to configure the TSC data system, thus eliminating a substantial amount of labor in database entry and formatting. The TSC database contains the TSC system configuration along with the experimental data, in which human physiological data must be de-commutated in real time. The data for each experiment must also be cataloged and archived for future retrieval. TSC software provides tools and resources for ground operation and data distribution to remote users consisting of principal investigators (PIs), biomedical engineers, scientists, engineers, payload specialists, and computer scientists. Operations support is provided for computer systems access, detailed networking, and mathematical and computational problems of International Space Station telemetry data. User training is provided for on-site staff, biomedical researchers, and other remote personnel in the use of the space-bound services via the Internet, which yields significant savings in facility resources and in travel time to NASA sites. The software used in support of the TSC could easily be adapted to other control-center applications, including not only other NASA payload monitoring facilities but also other types of control activities, such as monitoring and control of electric grids, chemical or nuclear plant processes, air traffic control, and the like.
Exploring Factors That Affect Adoption of Computer Security Practices among College Students
ERIC Educational Resources Information Center
Alqarni, Amani
2017-01-01
Cyber-attacks threaten the security of computer users' information, networks, machines, and privacy. Studies of computer security education, awareness, and training among ordinary computer users, college students, non-IT-oriented user groups, and non-technically trained citizens are limited. Most research has focused on computer security standards…
Motivating Contributions for Home Computer Security
ERIC Educational Resources Information Center
Wash, Richard L.
2009-01-01
Recently, malicious computer users have been compromising computers en masse and combining them to form coordinated botnets. The rise of botnets has brought the problem of home computers to the forefront of security. Home computer users commonly have insecure systems; these users do not have the knowledge, experience, and skills necessary to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2008-05-22
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998.

Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period January 1 - March 31, 2008, for the fixed sites. The AMF is being deployed to China and is not in operation this quarter. The second quarter comprises a total of 2,184 hours. The average as well as the individual site values exceeded our goal this quarter.

The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. FKB represents the AMF statistics for the past deployment in Haselbach, Germany (2007); NIM represents the AMF statistics for the past deployment in Niamey, Niger (2006); and PYE represents the AMF Archive statistics for the past deployment in Point Reyes, California (2005). In addition, users who do not want to wait for data to be provided through the ACRF Archive can request a research account on the local site data system. The seven computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; and the DMF at PNNL.

The ACRF also serves as a data repository for a long-term Arctic atmospheric observatory in Eureka, Canada (80 degrees 05 minutes N, 86 degrees 43 minutes W) as part of the multiagency Study of Environmental Arctic Change (SEARCH) Program. NOAA began providing instruments for the site in 2005, and currently cloud radar data are available. The intent of the site is to monitor the important components of the Arctic atmosphere, including clouds, aerosols, atmospheric radiation, and local-scale atmospheric dynamics. Because of the similarity of ACRF NSA data streams and the important synergy that can be formed between a network of Arctic atmospheric observations, much of the SEARCH observatory data are archived in the ARM Archive; instruments will be added to the site over time. For more information, please visit http://www.db.arm.gov/data. The designation for the archived Eureka data is YEU, which is now included in the ACRF user metrics.

This quarterly report provides the cumulative numbers of visitors and user accounts by site for the period April 1, 2007 - March 31, 2008; Table 2 summarizes the cumulative users for that period. For the second quarter of FY 2008, the overall number of users was nearly as high as in the last reporting period, in which a record high was established. This quarter, a new record high was established for the number of user days, particularly due to the large number of field campaign activities in conjunction with the AMF deployment in Germany, as well as major field campaigns at the NSA and SGP sites. This quarter, 37% of the Archive users are ARM science-funded principal investigators, and 23% of all other facility users are either ARM science-funded principal investigators or ACRF infrastructure personnel.

For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, 52 weeks per year; time is reported in days instead of hours, and any lost work time incurred by any employee is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period January 1 - March 31, 2008. There were no incidents this reporting period.
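A minimal sketch of the per-instrument completeness metric described above; the stream names and record counts are invented for illustration:

```python
# Sketch: ratio of data records received at the Archive to records expected.
expected_per_day = {"mfrsr": 1440, "ceil": 2880, "sonde": 4}   # hypothetical streams
received_today  = {"mfrsr": 1432, "ceil": 2880, "sonde": 3}

for stream, expected in expected_per_day.items():
    ratio = received_today.get(stream, 0) / expected
    print(f"{stream:6s} {100 * ratio:5.1f}% of expected records")
```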
Setting Up the JBrowse Genome Browser
Skinner, Mitchell E; Holmes, Ian H
2010-01-01
JBrowse is a web-based tool for visualizing genomic data. Unlike most other web-based genome browsers, JBrowse exploits the capabilities of the user's web browser to make scrolling and zooming fast and smooth. It supports the browsers used by almost all internet users, and is relatively simple to install. JBrowse can utilize multiple types of data in a variety of common genomic data formats, including genomic feature data in bioperl databases, GFF files, and BED files, and quantitative data in wiggle files. This unit describes how to obtain the JBrowse software, set it up on a Linux or Mac OS X computer running as a web server and incorporate genome annotation data from multiple sources into JBrowse. After completing the protocols described in this unit, the reader will have a web site that other users can visit to browse the genomic data. PMID:21154710
ERIC Educational Resources Information Center
Smith, Peter, Ed.
Topics addressed by the papers included in these proceedings include: multimedia in the classroom; World Wide Web site development; the evolution of academic library services; a Web-based literature course; development of a real-time intelligent network environment; serving grades over the Internet; e-mail over a Web browser; and using technology to…
Standardized description of scientific evidence using the Evidence Ontology (ECO)
Chibucos, Marcus C.; Mungall, Christopher J.; Balakrishnan, Rama; Christie, Karen R.; Huntley, Rachael P.; White, Owen; Blake, Judith A.; Lewis, Suzanna E.; Giglio, Michelle
2014-01-01
The Evidence Ontology (ECO) is a structured, controlled vocabulary for capturing evidence in biological research. ECO includes diverse terms for categorizing evidence that supports annotation assertions including experimental types, computational methods, author statements and curator inferences. Using ECO, annotation assertions can be distinguished according to the evidence they are based on such as those made by curators versus those automatically computed or those made via high-throughput data review versus single test experiments. Originally created for capturing evidence associated with Gene Ontology annotations, ECO is now used in other capacities by many additional annotation resources including UniProt, Mouse Genome Informatics, Saccharomyces Genome Database, PomBase, the Protein Information Resource and others. Information on the development and use of ECO can be found at http://evidenceontology.org. The ontology is freely available under Creative Commons license (CC BY-SA 3.0), and can be downloaded in both Open Biological Ontologies and Web Ontology Language formats at http://code.google.com/p/evidenceontology. Also at this site is a tracker for user submission of term requests and questions. ECO remains under active development in response to user-requested terms and in collaborations with other ontologies and database resources. Database URL: Evidence Ontology Web site: http://evidenceontology.org PMID:25052702
MED31/437: A Web-based Diabetes Management System: DiabNet
Zhao, N; Roudsari, A; Carson, E
1999-01-01
Introduction: A web-based system (DiabNet) was developed to provide instant access to Electronic Diabetes Records (EDR) for end users and real-time information for healthcare professionals to facilitate their decision-making. It integrates a portable glucometer, a handheld computer, a mobile phone, and Internet access as a combined telecommunication and mobile-computing solution for diabetes management. Methods: Active Server Pages (ASP) embedded with ActiveX controls and VBScript were developed to allow remote data upload, retrieval, and interpretation. Advisory and Internet-based learning features, together with a video teleconferencing component, make the DiabNet web site an informative platform for web consultation. Results: The system is being evaluated among several UK Internet diabetes discussion groups and the Diabetes Day Centre at Guy's & St. Thomas' Hospital. Much positive feedback has been received via the web site, indicating that DiabNet is an advanced web-based diabetes management system that can help patients keep closer control of self-monitored blood glucose remotely, and an integrated diabetes information resource that offers telemedicine knowledge in diabetes management. Discussion: In summary, DiabNet introduces innovative online diabetes management concepts, such as online appointment and consultation, enabling users to access diabetes management information without time or location limitations and without security concerns.
RNA-Rocket: an RNA-Seq analysis resource for infectious disease research
Warren, Andrew S.; Aurrecoechea, Cristina; Brunk, Brian; Desai, Prerak; Emrich, Scott; Giraldo-Calderón, Gloria I.; Harb, Omar; Hix, Deborah; Lawson, Daniel; Machi, Dustin; Mao, Chunhong; McClelland, Michael; Nordberg, Eric; Shukla, Maulik; Vosshall, Leslie B.; Wattam, Alice R.; Will, Rebecca; Yoo, Hyun Seung; Sobral, Bruno
2015-01-01
Motivation: RNA-Seq is a method for profiling transcription using high-throughput sequencing and is an important component of many research projects that wish to study transcript isoforms, condition specific expression and transcriptional structure. The methods, tools and technologies used to perform RNA-Seq analysis continue to change, creating a bioinformatics challenge for researchers who wish to exploit these data. Resources that bring together genomic data, analysis tools, educational material and computational infrastructure can minimize the overhead required of life science researchers. Results: RNA-Rocket is a free service that provides access to RNA-Seq and ChIP-Seq analysis tools for studying infectious diseases. The site makes available thousands of pre-indexed genomes, their annotations and the ability to stream results to the bioinformatics resources VectorBase, EuPathDB and PATRIC. The site also provides a combination of experimental data and metadata, examples of pre-computed analysis, step-by-step guides and a user interface designed to enable both novice and experienced users of RNA-Seq data. Availability and implementation: RNA-Rocket is available at rnaseq.pathogenportal.org. Source code for this project can be found at github.com/cidvbi/PathogenPortal. Contact: anwarren@vt.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:25573919
External Data and Attribute Hyperlink Programs for Promis*e(Registered Trademark)
NASA Technical Reports Server (NTRS)
Derengowski, Rich; Gruel, Andrew
2001-01-01
External Data and Attribute Hyperlink are computer programs that can be added to Promis*e(trademark) which is a commercial software system that automates routine tasks in the design (including drawing schematic diagrams) of electrical control systems. The programs were developed under the Stennis Space Center's (SSC) Dual Use Technology Development Program to provide capabilities for SSC's BMCS configuration management system which uses Promis*e(trademark). The External Data program enables the storage and management of information in an external database linked to a drawing. Changes can be made either in the database or on the drawing. Information that originates outside Promis*e(trademark) can be stored in custom fields that can be added to the database. Although this information is not available in Promis*e(trademark) printed drawings, it can be associated with symbols in the drawings, and can be retrieved through the drawings when the software is running. The Attribute Hyperlink program enables the addition of hyperlink information as attributes of symbols. This program enables the formation of a direct hyperlink between a schematic diagram and an Internet site or a file on a compact disk, on the user's hard drive, or on another computer on a network to which the user's computer is connected. The user can then obtain information directly related to the part (e.g., maintenance, or troubleshooting information) associated with the hyperlink.
Measuring use patterns of online journals and databases
De Groote, Sandra L.; Dorsch, Josephine L.
2003-01-01
Purpose: This research sought to determine use of online biomedical journals and databases and to assess current user characteristics associated with the use of online resources in an academic health sciences center. Setting: The Library of the Health Sciences–Peoria is a regional site of the University of Illinois at Chicago (UIC) Library with 350 print journals, more than 4,000 online journals, and multiple online databases. Methodology: A survey was designed to assess online journal use, print journal use, database use, computer literacy levels, and other library user characteristics. A survey was sent through campus mail to all (471) UIC Peoria faculty, residents, and students. Results: Forty-one percent (188) of the surveys were returned. Ninety-eight percent of the students, faculty, and residents reported having convenient access to a computer connected to the Internet. While 53% of the users indicated they searched MEDLINE at least once a week, other databases showed much lower usage. Overall, 71% of respondents indicated a preference for online over print journals when possible. Conclusions: Users prefer online resources to print, and many choose to access these online resources remotely. Convenience and full-text availability appear to play roles in selecting online resources. The findings of this study suggest that databases without links to full text and online journal collections without links from bibliographic databases will have lower use. These findings have implications for collection development, promotion of library resources, and end-user training. PMID:12883574
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.
2004-12-01
Several years ago we created an exciting and engaging multimedia exhibit for the Hatfield Marine Science Center that lets visitors simulate a dive to the seafloor with the remotely operated vehicle (ROV) ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. The public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. We are now completing a revision that will make this engaging virtual exploration accessible to a much larger audience: with minor modifications we will be able to put the exhibit onto the World Wide Web, so that any person with Internet access can view and learn about exciting volcanic and hydrothermal activity at Axial Seamount on the Juan de Fuca Ridge. The modifications address some cosmetic and logistical issues encountered in the museum environment, but mainly involve compressing video clips so they can be delivered more efficiently over the Internet. The web version, like the museum version, will allow users to choose from one of three dive sites in the caldera of Axial Volcano. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360-degree panorama and can look in all directions by manipulating the mouse. By clicking on markers embedded in the scene, the user can either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene; audio accompanying the video clips tells the user where they are going or what they are looking at. After exploring the dive site, the user ends the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. This virtual exploration is part of the NeMO web site and will be at this URL: http://www.pmel.noaa.gov/vents/dive.html
User-Centered Computer Aided Language Learning
ERIC Educational Resources Information Center
Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.
2006-01-01
In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these resources is sometimes difficult for research groups at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster or buying dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open-source Python routines, a large number of desktop computers can work together over a local network to sort through large data sets: running traditional analysis routines in an "embarrassingly parallel" manner yields gains in speed without requiring the investigator to write routines using highly specialized methodology. We demonstrate this concept applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray light-curve analysis. In these scenarios, we see a speed-up factor that scales almost linearly with the number of cores in the cluster. Additionally, we show that using the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
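A minimal sketch of the pattern, using Python's standard multiprocessing pool on a single machine; the paper spreads the same idea across networked lab computers, and process_image below is an invented stand-in for the real photometry routine:

```python
# Sketch: "embarrassingly parallel" analysis, one unchanged routine per input.
from multiprocessing import Pool

def process_image(path: str) -> tuple[str, float]:
    # Placeholder for e.g. photometry on one large-format image.
    return path, sum(ord(c) for c in path) % 100 / 100.0

if __name__ == "__main__":
    paths = [f"frame_{i:04d}.fits" for i in range(1000)]
    with Pool() as pool:                      # one worker per available core
        results = pool.map(process_image, paths, chunksize=50)
    print(len(results), "images processed")
```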
User Vulnerability and its Reduction on a Social Networking Site
2014-01-01
Social networking sites bring about new ways for users to communicate and explore other users' profiles and friend networks. Social networking sites have reshaped business models [Vaynerchuk 2009] and provided platforms… While the purpose of social networking sites is to enable users to be more social, user privacy and security issues cannot be ignored. On one hand, most social networking sites…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, James; Wang, Yilin; Walter, Eric D.
The hydrothermal stability of Cu/SSZ-13 SCR catalysts has been extensively studied, yet atomic-level understanding of changes to the zeolite support and the Cu active sites during hydrothermal aging is still lacking. In this work, via the utilization of spectroscopic methods including solid-state 27Al and 29Si NMR, EPR, DRIFTS, and XPS, together with imaging and elemental mapping using STEM, detailed kinetic analyses, and theoretical calculations with DFT, various Cu species, including two types of isolated active sites and CuOx clusters, were precisely quantified for samples hydrothermally aged under varying conditions. This quantification convincingly confirms the exceptional hydrothermal stability of isolated Cu2+-2Z sites, and the gradual conversion of [Cu(OH)]+-Z to CuOx clusters with increasing aging severity. This stability difference is rationalized via DFT from the difference in hydrolysis activation barriers between the two isolated sites. Discussions are provided on the nature of the CuOx clusters and their possible detrimental roles in catalyst stability. Finally, a few rational design principles for Cu/SSZ-13 are derived rigorously from the atomic-level understanding of this catalyst obtained here. The authors gratefully acknowledge the US Department of Energy (DOE), Energy Efficiency and Renewable Energy, Vehicle Technologies Office for the support of this work. Computing time was granted by a user proposal at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). The experimental studies described in this paper were performed in the EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL). PNNL is operated for the US DOE by Battelle.
Graphical User Interface Programming in Introductory Computer Science.
ERIC Educational Resources Information Center
Skolnick, Michael M.; Spooner, David L.
Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…
Online Remote Sensing Interface
NASA Technical Reports Server (NTRS)
Lawhead, Joel
2007-01-01
BasinTools Module 1 processes remotely sensed raster data, including multi- and hyper-spectral data products, via a Web site with no downloads and no plug-ins required. The interface provides standardized algorithms designed so that a user with little or no remote-sensing experience can use the site. This Web-based approach reduces the amount of software, hardware, and computing power necessary to perform the specified analyses. Access to imagery and derived products is enterprise-level and controlled. Because the user never takes possession of the imagery, the licensing of the data is greatly simplified. BasinTools takes the "just-in-time" inventory control model from commercial manufacturing and applies it to remotely-sensed data. Products are created and delivered on-the-fly with no human intervention, even for casual users. Well-defined procedures can be combined in different ways to extend verified and validated methods in order to derive new remote-sensing products, which improves efficiency in any well-defined geospatial domain. Remote-sensing products produced in BasinTools are self-documenting, allowing procedures to be independently verified or peer-reviewed. The software can be used enterprise-wide to conduct low-level remote sensing, viewing, sharing, and manipulating of image data without the need for desktop applications.
Knowledge Provenance in Semantic Wikis
NASA Astrophysics Data System (ADS)
Ding, L.; Bao, J.; McGuinness, D. L.
2008-12-01
Collaborative online environments built on a Wiki infrastructure are becoming more widespread. One of the strengths of a Wiki environment is that it is relatively easy for numerous users to contribute original content and modify existing content (potentially originally generated by others). As more users come to depend on informational content evolving in Wiki communities, it becomes more important to track the provenance of that information. Semantic Wikis expand upon traditional Wiki environments by adding computationally understandable encodings of some of the terms and relationships in the Wiki. We have developed a Semantic Wiki environment extended with provenance markup: the provenance of original contributions as well as modifications is encoded using the provenance markup component of the Proof Markup Language. The Wiki environment generates the provenance markup automatically, so users are not required to encode author, contribution date, or modification trail themselves. Further, our Wiki environment includes a search component that understands the provenance primitives and can thus provide a provenance-aware search facility. We describe the knowledge provenance infrastructure of our Semantic Wiki and show how it is being used as the foundation of our group web site as well as a number of project web sites.
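A hedged sketch of the automatic capture this implies, using an invented in-memory representation rather than the actual Proof Markup Language encoding:

```python
# Sketch: record author, timestamp, and prior revision on every edit, so a
# provenance-aware search can filter by contributor or date.
import time

provenance = []  # (page, revision, author, timestamp, previous_revision)

def record_edit(page: str, author: str, prev_rev: int | None) -> int:
    rev = (prev_rev or 0) + 1
    provenance.append((page, rev, author, time.time(), prev_rev))
    return rev

r1 = record_edit("SeaIceExtent", "alice", None)   # original contribution
r2 = record_edit("SeaIceExtent", "bob", r1)       # modification trail

# Provenance-aware query: who made the latest revision of the page?
latest = max((p for p in provenance if p[0] == "SeaIceExtent"), key=lambda p: p[1])
print(latest[2])  # -> bob
```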
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Building on NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to access these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that ran the software for the interactive map, and three for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back end of the AIRSAR Web site was updated to access the MySQL database; to do this, a few of the scripts needed to be modified, specifically three Perl scripts that query the database: the database connections were updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented to replace one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
Contributing opportunistic resources to the grid with HTCondor-CE-Bosco
NASA Astrophysics Data System (ADS)
Weitzel, Derek; Bockelman, Brian
2017-10-01
The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, installing, configuring, and maintaining the HTCondor-CE can be a difficult task for smaller WLCG Tier-3 sites or opportunistic clusters. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public-key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized model for running grid services, which complements the traditional distributed model. We will show the architecture of an HTCondor-CE-Bosco-enabled site, as well as feedback from multiple sites that have deployed it.
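A hedged sketch of the mechanism Bosco builds on, submitting to a site's own batch system over SSH; the host name, key path, and the assumption of a Slurm submit command are all illustrative, and Bosco itself automates this far more completely:

```python
# Sketch: with only SSH key access and a user account on the cluster login
# host, a job can be forwarded to the local batch system.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("login.cluster.example.edu", username="osguser",
               key_filename="/home/osguser/.ssh/id_rsa")

# Forward a submission to the site's batch system (assumed here to be Slurm).
stdin, stdout, stderr = client.exec_command("sbatch pilot_job.sh")
print(stdout.read().decode().strip())
client.close()
```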
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible: it aggregates, processes, and stores the information and status of different HEP monitoring resources in a common database, and displays them through a single interface. However, this model of HappyFace relied on monitoring resources that are always under development in the HEP experiments, so HappyFace needed direct access methods to the grid application and grid service layers of the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System, and propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows the basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources of the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to perform various diagnostics in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated, and it now displays the information and status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
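A minimal sketch of the aggregation pattern, assuming hypothetical status URLs and an SQLite table; the real HappyFace modules and schema differ:

```python
# Sketch: modules fetch the status of different monitoring sources and store
# one normalized row per check in a common database for a single web view.
import sqlite3, time, urllib.request

SOURCES = {  # module name -> invented status endpoint
    "transfers":  "https://monitor.example.org/transfers/status",
    "ganga_jobs": "https://monitor.example.org/ganga/summary",
}

db = sqlite3.connect("happyface.db")
db.execute("CREATE TABLE IF NOT EXISTS status (module TEXT, ok INTEGER, checked REAL)")

for module, url in SOURCES.items():
    try:
        ok = urllib.request.urlopen(url, timeout=10).status == 200
    except OSError:          # unreachable source counts as a failed check
        ok = False
    db.execute("INSERT INTO status VALUES (?, ?, ?)", (module, int(ok), time.time()))
db.commit()  # a single interface can now render the common table
```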
Astronomical Research with the MicroObservatory Net
NASA Astrophysics Data System (ADS)
Brecher, K.; Sadler, P.; Gould, R.; Leiker, S.; Antonucci, P.; Deutsch, F.
1997-05-01
We have developed a fully integrated automated astronomical telescope system which combines the imaging power of a cooled CCD with a self-contained and weatherized 15 cm reflecting optical telescope and mount. The MicroObservatory Net consists of five of these telescopes, currently being deployed around the world at widely distributed longitudes. Remote access to the MicroObservatories over the Internet has now been implemented. Software for computer control, pointing, focusing, filter selection, and pattern recognition has been developed as part of the project. The telescopes can be controlled in real time or in delay mode, from a Macintosh, PC, or other computer using Web-based software. The Internet address of the telescopes is http://cfa-www.harvard.edu/cfa/sed/MicroObservatory/MicroObservatory.html. In the real-time mode, individuals have access to all of the telescope control functions without the need for an 'on-site' operator; users can sign up for a specific period of time. In the batch mode, users can submit requests for delayed telescope observations; after a MicroObservatory completes a job, the user is automatically notified by e-mail that the image is available for viewing and downloading from the Web site. The telescopes were designed for classroom instruction, as well as for use by students and amateur astronomers for original scientific research projects. We are currently examining a variety of technical and educational questions about the use of the telescopes, including: (1) What are the best approaches to scheduling real-time versus batch-mode observations? (2) What criteria should be used for allocating telescope time? (3) With more than one telescope deployed, is it advantageous for each telescope to be used for just one type of observation, i.e., some for photometric use, others for imaging? (4) What are the most valuable applications of the MicroObservatories in astronomical research? Support for the MicroObservatory Net has been provided by the NSF, Apple Computer, Inc., and Kodak, Inc.
Comparison of Computer-based Clinical Decision Support Systems and Content for Diabetes Mellitus.
Kantor, M; Wright, A; Burton, M; Fraser, G; Krall, M; Maviglia, S; Mohammed-Rajput, N; Simonaitis, L; Sonnenberg, F; Middleton, B
2011-01-01
Computer-based clinical decision support (CDS) systems have been shown to improve quality of care and workflow efficiency, and health care reform legislation relies on electronic health records and CDS systems to improve the cost and quality of health care in the United States; however, the heterogeneity of CDS content and infrastructure of CDS systems across sites is not well known. We aimed to determine the scope of CDS content in diabetes care at six sites, assess the capabilities of CDS in use at these sites, characterize the scope of CDS infrastructure at these sites, and determine how the sites use CDS beyond individual patient care in order to identify characteristics of CDS systems and content that have been successfully implemented in diabetes care. We compared CDS systems in six collaborating sites of the Clinical Decision Support Consortium. We gathered CDS content on care for patients with diabetes mellitus and surveyed institutions on characteristics of their site, the infrastructure of CDS at these sites, and the capabilities of CDS at these sites. The approach to CDS and the characteristics of CDS content varied among sites. Some commonalities included providing customizability by role or user, applying sophisticated exclusion criteria, and using CDS automatically at the time of decision-making. Many messages were actionable recommendations. Most sites had monitoring rules (e.g. assessing hemoglobin A1c), but few had rules to diagnose diabetes or suggest specific treatments. All sites had numerous prevention rules including reminders for providing eye examinations, influenza vaccines, lipid screenings, nephropathy screenings, and pneumococcal vaccines. Computer-based CDS systems vary widely across sites in content and scope, but both institution-created and purchased systems had many similar features and functionality, such as integration of alerts and reminders into the decision-making workflow of the provider and providing messages that are actionable recommendations.
Home Computer and Internet User Security
2005-01-01
ReOpt[trademark] V2.0 user guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M K; Bryant, J L
1992-10-01
Cleaning up the large number of contaminated waste sites at Department of Energy (DOE) facilities in the US presents a large and complex problem. Each waste site poses a singular set of circumstances (different contaminants, environmental concerns, and regulations) that affect selection of an appropriate response. Pacific Northwest Laboratory (PNL) developed ReOpt to provide information about the remedial action technologies that are currently available. It is an easy-to-use personal computer program and database that contains data about these remedial technologies and auxiliary data about contaminants and regulations. ReOpt will enable engineers and planners involved in environmental restoration efforts to quickly identify potentially applicable environmental restoration technologies and access corresponding information required to select cleanup activities for DOE sites.
Smith, S. Jerrod; Esralew, Rachel A.
2010-01-01
The USGS Streamflow Statistics (StreamStats) Program was created to make geographic information systems-based estimation of streamflow statistics easier, faster, and more consistent than previously used manual techniques. The StreamStats user interface is a map-based internet application that allows users to easily obtain streamflow statistics, basin characteristics, and other information for user-selected U.S. Geological Survey data-collection stations and ungaged sites of interest. The application relies on the data collected at U.S. Geological Survey streamflow-gaging stations, computer-aided computations of drainage-basin characteristics, and published regression equations for several geographic regions comprising the United States. The StreamStats application interface allows the user to (1) obtain information on features in selected map layers, (2) delineate drainage basins for ungaged sites, (3) download drainage-basin polygons to a shapefile, (4) compute selected basin characteristics for delineated drainage basins, (5) estimate selected streamflow statistics for ungaged points on a stream, (6) print map views, (7) retrieve information for U.S. Geological Survey streamflow-gaging stations, and (8) get help on using StreamStats. StreamStats was designed for national application, with each state, territory, or group of states responsible for creating unique geospatial datasets and regression equations to compute selected streamflow statistics. With the cooperation of the Oklahoma Department of Transportation, StreamStats has been implemented for Oklahoma and is available at http://water.usgs.gov/osw/streamstats/. The Oklahoma StreamStats application covers 69 processed hydrologic units and most of the state of Oklahoma. Basin characteristics available for computation include contributing drainage area, contributing drainage area that is unregulated by Natural Resources Conservation Service floodwater retarding structures, mean-annual precipitation at the drainage-basin outlet for the period 1961-1990, 10-85 channel slope (slope between points located at 10 percent and 85 percent of the longest flow-path length upstream from the outlet), and percent impervious area. The Oklahoma StreamStats application interacts with the National Streamflow Statistics database, which contains the peak-flow regression equations from a previously published report. Fourteen peak-flow (flood) frequency statistics are available for computation in the Oklahoma StreamStats application. These statistics include the peak flow at 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for rural, unregulated streams; and the peak flow at 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for rural streams that are regulated by Natural Resources Conservation Service floodwater retarding structures. Basin characteristics and streamflow statistics cannot be computed for locations in playa basins (mostly in the Oklahoma Panhandle) and along main stems of the largest river systems in the state, namely the Arkansas, Canadian, Cimarron, Neosho, Red, and Verdigris Rivers, because parts of the drainage areas extend outside of the processed hydrologic units.
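As a rough illustration of how such regression equations are evaluated, the sketch below uses the generic power-law form common to USGS regional peak-flow regressions; the coefficients are hypothetical stand-ins, not the published Oklahoma equations.

```python
# Hypothetical coefficients for illustration only; the real regional regression
# equations for Oklahoma are published separately and differ by region and
# recurrence interval.
def peak_flow_cfs(drainage_area_sqmi, a=89.0, b=0.65):
    """Generic power-law regression form Q_T = a * A**b (cubic feet per second)."""
    return a * drainage_area_sqmi ** b

print(round(peak_flow_cfs(25.0)))  # estimate for a hypothetical 25-square-mile basin
```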
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.; Hanshumaker, W.; Osis, V.; Hamilton, C.
2002-12-01
We have created a new interactive exhibit in which the user can sit down and simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. This new public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. The exhibit is designed to look like the real ROPOS control console and includes three video monitors, a PC, a DVD player, an overhead speaker, graphic panels, buttons, lights, dials, and a seat in front of a joystick. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. The user can choose one of three dive sites in the caldera of Axial Volcano. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360-degree panorama and can look in all directions by manipulating the joystick. By clicking on markers embedded in the scene, the user can then either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene. Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user is finished exploring the dive site, they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. The user can then choose a different dive or make the same dive again. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. The exhibit software was created with Macromedia Director using Apple QuickTime and QuickTime VR. The exhibit is based on the NeMO Explorer web site (http://www.pmel.noaa.gov/vents/nemo/explorer.html).
Identification of genomic sites for CRISPR/Cas9-based genome editing in the Vitis vinifera genome.
Wang, Yi; Liu, Xianju; Ren, Chong; Zhong, Gan-Yuan; Yang, Long; Li, Shaohua; Liang, Zhenchang
2016-04-21
CRISPR/Cas9 has been recently demonstrated as an effective and popular genome editing tool for modifying genomes of humans, animals, microorganisms, and plants. Success of such genome editing is highly dependent on the availability of suitable target sites in the genomes to be edited. Many specific target sites for CRISPR/Cas9 have been computationally identified for several annual model and crop species, but such sites have not been reported for perennial, woody fruit species. In this study, we identified and characterized five types of CRISPR/Cas9 target sites in the widely cultivated grape species Vitis vinifera and developed a user-friendly database for editing grape genomes in the future. A total of 35,767,960 potential CRISPR/Cas9 target sites were identified from grape genomes in this study. Among them, 22,597,817 target sites were mapped to specific genomic locations and 7,269,788 were found to be highly specific. Protospacers and PAMs were found to distribute uniformly and abundantly in the grape genomes. They were present in all the structural elements of genes with the coding region having the highest abundance. Five PAM types, TGG, AGG, GGG, CGG and NGG, were observed. With the exception of the NGG type, they were abundantly present in the grape genomes. Synteny analysis of similar genes revealed that the synteny of protospacers matched the synteny of homologous genes. A user-friendly database containing protospacers and detailed information of the sites was developed and is available for public use at the Grape-CRISPR website ( http://biodb.sdau.edu.cn/gc/index.html ). Grape genomes harbour millions of potential CRISPR/Cas9 target sites. These sites are widely distributed among and within chromosomes with predominant abundance in the coding regions of genes. We developed a publicly-accessible Grape-CRISPR database for facilitating the use of the CRISPR/Cas9 system as a genome editing tool for functional studies and molecular breeding of grapes. Among other functions, the database allows users to identify and select multi-protospacers for editing similar sequences in grape genomes simultaneously.
Migrating the STARLINK Network from VMS to Unix
NASA Astrophysics Data System (ADS)
Clayton, C.
The Starlink Project is a UK-wide astronomical computing service consisting of a network of computers used by UK astronomers at over 25 sites, a collection of software to calibrate and analyze astronomical data, and a team of people to give hardware, software, and administrative support. In order to exploit the most cost-effective hardware and to maintain compatibility with the international community, Starlink is migrating from an entirely VAX/VMS based service to UNIX-based systems. This migration is almost complete, and this paper describes some of the solutions adopted for the wide variety of problems which were encountered. Migration of the hardware platform is discussed first. Equipment which can be re-used under Unix is identified. System software and non-astronomical applications which are required to allow a smooth transition from VMS to Unix are considered next. While many VMS functions can be replaced with Unix equivalents, it has become apparent that there is a small number of key VMS applications which must be provided on the replacement Unix platform to avoid considerable disruption to users. Various strategies for moving the users themselves from VMS to UNIX are considered and their relative merits compared. Fast migration routes are considered to be more effective as long as certain key applications and user aids are already in place. The porting of the Starlink Software Collection is discussed, as is the problem of migrating large quantities of private user code.
Centralized Authorization Using a Direct Service, Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, A
Authorization is the process of deciding if entity X is allowed to have access to resource Y. Determining the identity of X is the job of the authentication process. One task of authorization in computer networks is to define and determine which user has access to which computers in the network. On Linux, the tendency exists to create a local account for each single user who should be allowed to log on to a computer. This is typically the case because a user not only needs login privileges on a computer but also additional resources like a home directory to actually do some work. Creating a local account on every computer takes care of all this. The problem with this approach is that these local accounts can be inconsistent with each other. The same user name could have a different user ID and/or group ID on different computers. Even more problematic is when two different accounts share the same user ID and group ID on different computers: user joe on computer1 could have user ID 1234 and group ID 56, and user jane on computer2 could have the same user ID 1234 and group ID 56. This is a big security risk when shared resources like NFS are used: the two different accounts look the same to an NFS server, so these users can wipe out each other's files. The solution to this inconsistency problem is to have only one central, authoritative data source for this kind of information and a means of providing all your computers with access to this central source. This is what a "Directory Service" is. The two directory services most widely used for centralizing authorization data are the Network Information Service (NIS, formerly known as Yellow Pages or YP) and the Lightweight Directory Access Protocol (LDAP).
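A toy sketch of the joe/jane collision described above, with hard-coded account maps standing in for per-host /etc/passwd data, shows why a single authoritative registry matters:

```python
# Detect the UID/GID collision risk described in the abstract by comparing
# local account maps gathered from two computers. The dictionaries are
# illustrative stand-ins for files fetched from computer1 and computer2.
computer1 = {"joe": (1234, 56)}    # user -> (uid, gid)
computer2 = {"jane": (1234, 56)}

def find_collisions(*hosts):
    seen = {}
    for accounts in hosts:
        for user, ids in accounts.items():
            if ids in seen and seen[ids] != user:
                print(f"collision: {seen[ids]} and {user} share uid/gid {ids}")
            seen.setdefault(ids, user)

find_collisions(computer1, computer2)
# -> collision: joe and jane share uid/gid (1234, 56)
```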
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select the pupil condition (normal or an abnormality), the specific site along the neurological pathway (retina, hypothalamus, etc.), the drug class input (barbiturate, narcotic, etc.), the stimulus/response mode, the display mode, the stimulus type and input waveform, the stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Web-based X-ray quality control documentation.
David, George; Burnett, Lou Ann; Schenkel, Robert
2003-01-01
The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal department recommended that we secure the site in order to keep out those wishing to make mischief. Our interim solution has been not to password-protect the page, which we feared would hinder access for occasional legitimate users, but simply not to provide links to it from other hospital and department pages. Utility and productivity were improved and time and money were saved by making radiological equipment quality control documentation instantly available on-line.
Uniformity on the grid via a configuration framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V Terekhov et al.
2003-03-11
As Grid permeates modern computing, Grid solutions continue to emerge and take shape. The actual Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
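As a minimal illustration of the configure-then-instantiate idea (not the SAM-Grid code itself, whose schema and tooling are XQuery/XSLT based), the sketch below reads a hypothetical XML site description and derives a resource advertisement from it:

```python
# Illustrative only: hypothetical site-configuration schema and advertisement.
import xml.etree.ElementTree as ET

SITE_XML = """
<site name="example-site">
  <cluster name="batch1" cpus="128"/>
  <storage path="/data" terabytes="40"/>
</site>
"""

def advertise(xml_text):
    site = ET.fromstring(xml_text)
    for cluster in site.findall("cluster"):
        print(f'{site.get("name")}: cluster {cluster.get("name")} '
              f'offers {cluster.get("cpus")} CPUs')

advertise(SITE_XML)  # -> example-site: cluster batch1 offers 128 CPUs
```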
Computer use changes generalization of movement learning.
Wei, Kunlin; Yan, Xiang; Kong, Gaiqing; Yin, Cong; Zhang, Fan; Wang, Qining; Kording, Konrad Paul
2014-01-06
Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory [4]. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and actions are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants. Copyright © 2014 Elsevier Ltd. All rights reserved.
Experience in the application of Java Technologies in telemedicine
Fedyukin, IV; Reviakin, YG; Orlov, OI; Doarn, CR; Harnett, BM; Merrell, RC
2002-01-01
Java language has been demonstrated to be an effective tool in supporting medical image viewing in Russia. This evaluation was completed by obtaining up to 20 images (the number depending on the client's computer workstation) from one patient using a commercially available computed tomography (CT) scanner. The images were compared against standard CT images that were viewed at the site of capture. There was no appreciable difference. The client side is a lightweight component that provides an intuitive interface for end users. Each image is loaded in its own thread and the user can begin work after the first image has been loaded. This feature is especially useful on slow connections (9.6 Kbps, for example). The server side, which is implemented by the Java Servlet Engine, works more effectively than common gateway interface (CGI) programs do. Advantages of the Java Technology place this program on the next level of application development. This paper presents a unique application of Java in telemedicine. PMID:12459045
Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.
Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor
2008-03-01
To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.
Cameo: A Python Library for Computer Aided Metabolic Engineering and Optimization of Cell Factories.
Cardoso, João G R; Jensen, Kristian; Lieven, Christian; Lærke Hansen, Anne Sofie; Galkina, Svetlana; Beber, Moritz; Özdemir, Emre; Herrgård, Markus J; Redestig, Henning; Sonnenschein, Nikolaus
2018-04-20
Computational systems biology methods enable rational design of cell factories on a genome-scale and thus accelerate the engineering of cells for the production of valuable chemicals and proteins. Unfortunately, the majority of these methods' implementations are either not published, rely on proprietary software, or do not provide documented interfaces, which has precluded their mainstream adoption in the field. In this work we present cameo, a platform-independent software that enables in silico design of cell factories and targets both experienced modelers as well as users new to the field. It is written in Python and implements state-of-the-art methods for enumerating and prioritizing knockout, knock-in, overexpression, and down-regulation strategies and combinations thereof. Cameo is an open source software project and is freely available under the Apache License 2.0. A dedicated Web site including documentation, examples, and installation instructions can be found at http://cameo.bio . Users can also give cameo a try at http://try.cameo.bio .
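A minimal usage sketch, based on cameo's documented entry points and assuming the package is installed (pip install cameo) with a recent cobrapy backend, looks like this:

```python
# Minimal sketch of cameo usage; assumes cameo's load_model helper and the
# bundled/BiGG E. coli model id "iJO1366" are available in your installation.
from cameo import load_model

model = load_model("iJO1366")        # genome-scale E. coli metabolic model
solution = model.optimize()          # plain flux balance analysis
print(solution.objective_value)      # predicted objective flux (growth rate)
```

The strain-design methods mentioned in the abstract (knockout, overexpression, and down-regulation enumeration) build on this same model object; the examples on http://try.cameo.bio show the full workflows.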
A Freeware Path to Neutron Computed Tomography
NASA Astrophysics Data System (ADS)
Schillinger, Burkhard; Craft, Aaron E.
Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to the users only on-site at the imaging facility to do their data processing. There is a demand from users to have image processing software at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors. Operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these have been developed for X-rays or synchrotron radiation and are not immediately usable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used even for gigabyte-scale datasets. This paper is not a scientific publication in the classic sense, but is intended as a review to provide searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated but surprisingly high-quality path to neutron computed tomography images in 3D, though not a replacement for the even more powerful commercial software mentioned above.
US hydropower resource assessment for Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Francfort, J.E.
1996-09-01
US DOE is developing an estimate of the undeveloped hydropower potential in the US. The Hydropower Evaluation Software (HES) is a computer model developed by INEL for this purpose. HES measures the undeveloped hydropower resources available in the US, using uniform criteria for measurement. The software was tested using hydropower information and data provided by the Southwestern Power Administration. It is a menu-driven program that allows the PC user to assign environmental attributes to potential hydropower sites, calculate development suitability factors for each site based on the environmental attributes, and generate reports. This report describes the resource assessment results for the State of Hawaii.
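The attribute-to-suitability calculation might be pictured as below; the attribute names and penalty weights are invented for illustration and are not the actual HES factors:

```python
# Hypothetical sketch of deriving a development suitability factor from
# environmental attributes; names and weights are made up for the example.
PENALTIES = {"wild_scenic_river": 0.9, "threatened_species": 0.75, "cultural_site": 0.9}

def suitability(attributes, base=1.0):
    for attr in attributes:
        base *= PENALTIES.get(attr, 1.0)  # each flagged attribute lowers suitability
    return base

print(suitability(["threatened_species"]))  # -> 0.75
```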
PINTA: a web server for network-based gene prioritization from expression data
Nitsch, Daniela; Tranchevent, Léon-Charles; Gonçalves, Joana P.; Vogt, Josef Korbinian; Madeira, Sara C.; Moreau, Yves
2011-01-01
PINTA (available at http://www.esat.kuleuven.be/pinta/; this web site is free and open to all users and there is no login requirement) is a web resource for the prioritization of candidate genes based on the differential expression of their neighborhood in a genome-wide protein–protein interaction network. Our strategy is meant for biological and medical researchers aiming at identifying novel disease genes using disease-specific expression data. PINTA supports both candidate gene prioritization (starting from a user-defined set of candidate genes) as well as genome-wide gene prioritization and is available for five species (human, mouse, rat, worm and yeast). As input data, PINTA only requires disease-specific expression data, and various platforms (e.g. Affymetrix) are supported. As a result, PINTA computes a gene ranking and presents the results as a table that can easily be browsed and downloaded by the user. PMID:21602267
MailMinder: taming DHCP's mailman interface.
Shultz, E K; Brown, R; Kotta, G
1992-01-01
While the Department of Veterans Affairs Decentralized Hospital Computer Program (DHCP) is one of the most widely disseminated and successful hospital information systems in existence, it currently is accessed through a user interface which is not as mature as the rest of the system. This interface is a VT-100-compatible, character-oriented interface using menus accessed by typed commands for feature access. This project demonstrated that a mature graphical user interface (MailMinder) can be successfully used as a "front-end" to DHCP. MailMinder is completely compatible with the existing unmodified DHCP electronic mail program, MailMan. MailMinder allows the user to be more efficient than the current interface and offers additional features over the current mail system. The program has undergone evaluation and limited deployment at five separate sites. The feature set of this program and its operation will be shown at this demonstration. The demonstration has implications for all current hospital information systems.
Controlling Laboratory Processes From A Personal Computer
NASA Technical Reports Server (NTRS)
Will, H.; Mackin, M. A.
1991-01-01
Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by the other two programs to identify user commands with device-driving routines written by the user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
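A schematic sketch of the checking step described in this patent abstract follows; the registry contents and numbering are illustrative stand-ins, not the patented implementation:

```python
# Accept a candidate id only if no configured registry already maps it to an
# existing account. Registry contents below are invented for the example.
registries = [
    {"alice": 500, "bob": 501},   # e.g. local files registry
    {"carol": 502},               # e.g. LDAP registry
]

def id_is_free(candidate_id):
    return all(candidate_id not in reg.values() for reg in registries)

def next_free_id(start=500):
    candidate = start
    while not id_is_free(candidate):
        candidate += 1
    return candidate

print(next_free_id())  # -> 503
```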
Automated Help System For A Supercomputer
NASA Technical Reports Server (NTRS)
Callas, George P.; Schulbach, Catherine H.; Younkin, Michael
1994-01-01
Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
Cho, Chiung-Yu; Hwang, Yea-Shwu; Cherng, Rong-Ju
2012-09-01
Although the prevalence of reported discomfort among computer workers is high, the impact of high computer workload on musculoskeletal symptoms remains unclear. The purpose of this study was to investigate the prevalence of musculoskeletal symptoms for office workers with high computer workload. The association between risk factors and musculoskeletal symptoms was also assessed. Two questionnaires were posted on the Web sites of 3 companies and 1 university to recruit computer users in Tainan, Taiwan, during May to July 2009. The 12-item Chinese Health Questionnaire and the Musculoskeletal Symptom Questionnaire were chosen as the evaluation tools for musculoskeletal symptoms and their associated risk factors. A Chinese Health Questionnaire score greater than 5 and computer usage greater than 7 h/d were used as the cutoffs to divide groups. Descriptive statistics were computed for mean values and frequencies. χ² analysis was used to determine significant differences between groups. A 0.05 level of significance was used for statistical comparisons. A total of 254 subjects returned the questionnaire, of which 203 met the inclusion criteria. The 3 leading regions of musculoskeletal symptoms among the computer users were the shoulder (73%), neck (71%), and upper back (60%) areas. Similarly, the 3 leading regions of musculoskeletal symptoms among the computer users with high workload were the shoulder (77.3%), neck (75.6%), and upper back (63.9%) regions. High psychologic distress was significantly associated with shoulder and upper back complaints (odds ratio [OR], 3.46; OR, 2.24), whereas a high workload was significantly associated with lower back complaints (OR, 1.89). Females were more likely to report shoulder complaints (OR, 2.25). This study found that high psychologic distress was significantly associated with shoulder and upper back pain, whereas high workload was associated with lower back pain. Women tended to have a greater risk of shoulder complaints than men. Developing an intervention that addresses both physical and psychologic problems is important for future studies. Copyright © 2012 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.
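For readers unfamiliar with this analysis style, the sketch below runs a 2x2 chi-square test and computes an odds ratio with SciPy; the counts are invented for illustration and are not the study's data.

```python
# 2x2 chi-square test and odds ratio for symptom prevalence by workload group.
# The table below is synthetic; it is not the study's data.
from scipy.stats import chi2_contingency

#            symptoms  no symptoms
table = [[75, 25],     # high computer workload
         [55, 48]]     # lower workload

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi2={chi2:.2f}, p={p:.3f}, OR={odds_ratio:.2f}")
```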
Improving User Access to the Integrated Multi-Satellite Retrievals for GPM (IMERG) Products
NASA Astrophysics Data System (ADS)
Huffman, George; Bolvin, David; Nelkin, Eric; Kidd, Christopher
2016-04-01
The U.S. Global Precipitation Measurement mission (GPM) team has developed the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm to take advantage of the international constellation of precipitation-relevant satellites and the Global Precipitation Climatology Centre surface precipitation gauge analysis. The goal is to provide a long record of homogeneous, high-resolution quasi-global estimates of precipitation. While expert scientific researchers are major users of the IMERG products, it is clear that many other user communities and disciplines also desire access to the data for wide-ranging applications. Lessons learned during the Tropical Rainfall Measuring Mission, the predecessor to GPM, led to some basic design choices that provided the framework for supporting multiple user bases. For example, two near-real-time "runs" are computed, the Early and Late (currently 5 and 15 hours after observation time, respectively), then the Final Run about 3 months later. The datasets contain multiple fields that provide insight into the computation of the complete precipitation data field, as well as estimates of the precipitation's phase (currently diagnostic). In parallel with this, the archive sites are working to provide the IMERG data in a variety of formats, and with subsetting and simple interactive analysis to make the data more easily available to non-expert users. The various options for accessing the data are summarized under the pmm.nasa.gov data access page. The talk will end by considering the feasibility of major user requests, including polar coverage, a simplified Data Quality Index, and reduced data latency for the Early Run. In brief, the first two are challenging, but under the team's control. The last requires significant action by some of the satellite data providers.
The Cloud Area Padovana: from pilot to production
NASA Astrophysics Data System (ADS)
Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.
2017-10-01
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been experimented with and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We also discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce the downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this Cloud infrastructure is being used, focusing in particular on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on cernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
Lessons from the MicroObservatory Net
NASA Astrophysics Data System (ADS)
Brecher, K.; Sadler, P.; Gould, R.; Leiker, S.; Antonucci, P.; Deutsch, F.
1998-12-01
Over the past several years, we have developed a fully integrated automated astronomical telescope system which combines the imaging power of a cooled CCD, with a self-contained and weatherized 15 cm reflecting optical telescope and mount. Each telescope can be pointed and focused remotely, and filters, field of view and exposure times can be changed easily. The MicroObservatory Net consists of five of these telescopes. They are being deployed around the world at widely distributed longitudes for access to distant night skies during local daytime. Remote access to the MicroObservatories over the Internet has been available to select schools since 1995. The telescopes can be controlled in real time or in delay mode, from any computer using Web-based software. Individuals have access to all of the telescope control functions without the need for an `on-site' operator. After a MicroObservatory completes a job, the user is automatically notified by e-mail that the image is available for viewing and downloading from the Web site. Images are archived at the Web site, along with sample challenges and a user bulletin board, all of which encourage collaboration between schools. The Internet address of the telescopes is http://mo-www.harvard.edu/MicroObservatory/. The telescopes were designed for classroom instruction by teachers, as well as for use by students and amateur astronomers for original scientific research projects. In this talk, we will review some of the experiences we, students and teachers have had in using the telescopes. Support for the MicroObservatory Net has been provided by the NSF, Apple Computer, Inc. and Kodak, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, T.
2000-07-01
The Write One, Run Many (WORM) site (worm.csirc.net) is the on-line home of the WORM language and is hosted by the Criticality Safety Information Resource Center (CSIRC) (www.csirc.net). The purpose of this web site is to create an on-line community for WORM users to gather, share, and archive WORM-related information. WORM is an embedded, functional, programming language designed to facilitate the creation of input decks for computer codes that take standard ASCII text files as input. A functional programming language is one that emphasizes the evaluation of expressions, rather than execution of commands. The simplest and perhaps most common example of a functional language is a spreadsheet such as Microsoft Excel. The spreadsheet user specifies expressions to be evaluated, while the spreadsheet itself determines the commands to execute, as well as the order of execution/evaluation. WORM functions in a similar fashion and, as a result, is very simple to use and easy to learn. WORM improves the efficiency of today's criticality safety analyst by allowing: (1) input decks for parameter studies to be created quickly and easily; (2) calculations and variables to be embedded into any input deck, thus allowing for meaningful parameter specifications; (3) problems to be specified using any combination of units; and (4) complex mathematically defined models to be created. WORM is completely written in Perl. Running on all variants of UNIX, Windows, MS-DOS, MacOS, and many other operating systems, Perl is one of the most portable programming languages available. As such, WORM works on practically any computer platform.
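WORM's own syntax is Perl-hosted and documented at the site above; purely to illustrate the underlying idea of embedding evaluated expressions in an ASCII input deck, here is a language-neutral sketch in Python:

```python
# Not WORM itself: a toy template expander that evaluates {expression} spans
# inside an ASCII input deck, spreadsheet-style. Names and syntax are invented.
import re

template = "radius = {r}\nvolume = {4/3 * 3.14159 * r**3}\n"

def expand(text, **variables):
    # replace each {expression} with its evaluated value
    return re.sub(r"\{([^}]*)\}",
                  lambda m: str(eval(m.group(1), {}, dict(variables))),
                  text)

print(expand(template, r=2.0))  # radius = 2.0 / volume = 33.51...
```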
Automating slope monitoring in mines with terrestrial lidar scanners
NASA Astrophysics Data System (ADS)
Conforti, Dario
2014-05-01
Static terrestrial laser scanners (TLS) have been an important component of slope monitoring for some time, and many solutions for monitoring the progress of a slide have been devised over the years. However, all of these solutions have required users to operate the lidar equipment in the field, creating a high cost in time and resources, especially if the surveys must be performed very frequently. This paper presents a new solution for monitoring slides, developed using a TLS and an automated data acquisition, processing and analysis system. In this solution, a TLS is permanently mounted within sight of the target surface and connected to a control computer. The control software on the computer automatically triggers surveys according to a user-defined schedule, parses data into point clouds, and compares data against a baseline. The software can base the comparison against either the original survey of the site or the most recent survey, depending on whether the operator needs to measure the total or recent movement of the slide. If the displacement exceeds a user-defined safety threshold, the control computer transmits alerts via SMS text messaging and/or email, including graphs and tables describing the nature and size of the displacement. The solution can also be configured to trigger the external visual/audio alarm systems. If the survey areas contain high-traffic areas such as roads, the operator can mark them for exclusion in the comparison to prevent false alarms. To improve usability and safety, the control computer can connect to a local intranet and allow remote access through the software's web portal. This enables operators to perform most tasks with the TLS from their office, including reviewing displacement reports, downloading survey data, and adjusting the scan schedule. This solution has proved invaluable in automatically detecting and alerting users to potential danger within the monitored areas while lowering the cost and work required for monitoring. An explanation of the entire system and a post-acquisition data demonstration will be presented.
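The baseline-comparison step described above lends itself to a compact sketch; the gridded surveys and threshold below are synthetic stand-ins for real TLS point clouds, not the vendor's software:

```python
# Per-cell displacement between two gridded surveys, with an alert threshold.
import numpy as np

baseline = np.zeros((100, 100))            # first survey of the slope (meters)
latest = baseline + np.random.normal(0, 0.002, baseline.shape)  # sensor noise
latest[40:45, 60:70] += 0.08               # simulate a moving patch of the slide

THRESHOLD_M = 0.05                         # user-defined safety threshold
displacement = np.abs(latest - baseline)
if (displacement > THRESHOLD_M).any():
    print(f"ALERT: displacement up to {displacement.max():.3f} m "
          f"exceeds {THRESHOLD_M} m")      # would trigger SMS/email in the field
```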
An Empirical Study of User Experience on Touch Mice
ERIC Educational Resources Information Center
Chou, Jyh Rong
2016-01-01
The touch mouse is a new type of computer mouse that provides users with a new way of touch-based environment to interact with computers. For more than a decade, user experience (UX) has grown into a core concept of human-computer interaction (HCI), describing a user's perceptions and responses that result from the use of a product in a particular…
The Gene Construction Kit: a new computer program for manipulating and presenting DNA constructs.
Gross, R H
1990-06-01
The Gene Construction Kit is a new tool for manipulating and displaying DNA sequence information. Constructs can be displayed either graphically or as formatted sequence. Segments of DNA can be cut out with restriction enzymes and pasted into other sites. The program keeps track of staggered ends, notifies the user of incompatibilities, and offers a choice of ligation options. Each segment of a construct can have its own defined thickness, pattern, direction and color. The sequence listing can be displayed in any font and style, in user-defined groupings. Nucleotide positions can be displayed, as can restriction sites and protein sequences. The DNA can be displayed as either single- or double-stranded. Restriction sites can be readily marked. Alternative views of the DNA can be maintained, and the history of the construct is automatically stored. Gel electrophoresis patterns can be generated and can be used in cloning project design. Extensive comments can be stored with the construct and can be searched rapidly for key words. High-quality illustrations showing multiple editable constructs with added graphics and text information can be generated for slides, posters or publication.
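The restriction-site bookkeeping the program automates can be approximated today with Biopython's Bio.Restriction module, shown here as a neutral illustration (not the Gene Construction Kit itself, which is a standalone program):

```python
# Search a construct for restriction sites with Biopython; the sequence is a
# short synthetic example containing EcoRI and BamHI sites.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI, BamHI

construct = Seq("ATGAATTCGGCCGGATCCTTGAATTCAA")
print("EcoRI cuts at:", EcoRI.search(construct))   # staggered-end cutter (G^AATTC)
print("BamHI cuts at:", BamHI.search(construct))   # G^GATCC
```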
Understanding health care communication preferences of veteran primary care users.
LaVela, Sherri L; Schectman, Gordon; Gering, Jeffrey; Locatelli, Sara M; Gawron, Andrew; Weaver, Frances M
2012-09-01
To assess veterans' health communication preferences (in-person, telephone, or electronic) for primary care needs and the impact of computer use on preferences. Structured patient interviews (n=448). Bivariate analyses examined preferences for primary care by 'infrequent' vs. 'regular' computer users. Only 54% were regular computer users, nearly all of whom had used the internet. 'Telephone' was preferred for 6 of 10 reasons (general medical questions, medication questions and refills, preventive care reminders, scheduling, and test results), although telephone was preferred by markedly fewer regular computer users. 'In-person' was preferred for new/ongoing conditions/symptoms, treatment instructions, and next care steps; these preferences were unaffected by computer use frequency. Among regular computer users, 1/3 preferred 'electronic' for preventive reminders (37%), test results (34%), and refills (32%). For most primary care needs, telephone communication was preferred, although by a greater proportion of infrequent vs. regular computer users. In-person communication was preferred for reasons that may require an exam or visual instructions. About 1/3 of regular computer users prefer electronic communication for routine needs, e.g., preventive reminders, test results, and refills. These findings can be used to plan patient-centered care that is aligned with veterans' preferred health communication methods. Published by Elsevier Ireland Ltd.
Organizational Strategies for End-User Computing Support.
ERIC Educational Resources Information Center
Blackmun, Robert R.; And Others
1988-01-01
Effective support for end users of computers has been an important issue in higher education from the first applications of general purpose mainframe computers through minicomputers, microcomputers, and supercomputers. The development of end user support is reviewed and organizational models are examined. (Author/MLW)
Information-seeking behavior changes in community-based teaching practices.
Byrnes, Jennifer A; Kulick, Tracy A; Schwartz, Diane G
2004-07-01
A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information.
A security mechanism based on evolutionary game in fog computing.
Sun, Yan; Lin, Fuhong; Zhang, Nan
2018-02-01
Fog computing is a distributed computing paradigm at the edge of the network and requires cooperation of users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed through wireless networks and are spread over an extensive geographical distribution. In this study, a credible third party was introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. The MATLAB simulation results show that the proposed mechanism can reduce the number of attack behaviors effectively and stimulate users to cooperate in application tasks positively.
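The evolutionary-game framing can be illustrated with a generic replicator-dynamics sketch; the payoff matrix below is invented and is not the paper's nervous-system-based model, but it shows why, without supervision, attacking can dominate cooperation:

```python
# Generic replicator dynamics for a cooperate/attack population game.
# Payoffs are illustrative; a supervising third party would modify them
# (e.g. by punishing attacks) to shift the stable equilibrium.
import numpy as np

payoff = np.array([[3.0, 0.0],    # cooperate vs (cooperate, attack)
                   [4.0, 1.0]])   # attack    vs (cooperate, attack)

x = np.array([0.9, 0.1])          # initial shares of cooperators and attackers
for _ in range(2000):
    fitness = payoff @ x
    x = x * fitness / (x @ fitness)   # discrete replicator update
print(x)  # with this payoff, the attacking strategy takes over the population
```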
Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud's data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell's GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.
Nass, C; Lee, K M
2001-09-01
Would people exhibit similarity-attraction and consistency-attraction toward unambiguously computer-generated speech even when personality is clearly not relevant? In Experiment 1, participants (extrovert or introvert) heard a synthesized voice (extrovert or introvert) on a book-buying Web site. Participants accurately recognized personality cues in text to speech and showed similarity-attraction in their evaluation of the computer voice, the book reviews, and the reviewer. Experiment 2, in a Web auction context, added personality of the text to the previous design. The results replicated Experiment 1 and demonstrated consistency (voice and text personality)-attraction. To maximize liking and trust, designers should set parameters, for example, words per minute or frequency range, that create a personality that is consistent with the user and the content being presented.
ATLAS Distributed Computing Experience and Performance During the LHC Run-2
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2014-04-29
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
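A minimal sketch of the shelf-to-pane idea in this abstract: operand names placed on a row shelf and a column shelf partition records into a grid of panes, one per combination of field values. All names here are illustrative, not the patented implementation.

from itertools import product

def build_panes(records, columns_field, rows_field):
    # Distinct field values on each shelf define the grid of panes.
    col_values = sorted({r[columns_field] for r in records})
    row_values = sorted({r[rows_field] for r in records})
    panes = {}
    for cv, rv in product(col_values, row_values):
        panes[(cv, rv)] = [r for r in records
                           if r[columns_field] == cv and r[rows_field] == rv]
    return panes  # each pane would then get its own axes and marks

sales = [{"region": "East", "year": 2010, "amount": 5.0},
         {"region": "West", "year": 2010, "amount": 3.5},
         {"region": "East", "year": 2011, "amount": 6.1}]
panes = build_panes(sales, columns_field="year", rows_field="region")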
BUMPER v1.0: a Bayesian user-friendly model for palaeo-environmental reconstruction
NASA Astrophysics Data System (ADS)
Holden, Philip B.; Birks, H. John B.; Brooks, Stephen J.; Bush, Mark B.; Hwang, Grace M.; Matthews-Bird, Frazer; Valencia, Bryan G.; van Woesik, Robert
2017-02-01
We describe the Bayesian user-friendly model for palaeo-environmental reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring ~2 s to build a 100-taxon model from a 100-site training set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training sets under ideal assumptions. We then use these to demonstrate the sensitivity of reconstructions to the characteristics of the training set, considering assemblage richness, taxon tolerances, and the number of training sites. We find that a useful guideline for the size of a training set is to provide, on average, at least 10 samples of each taxon. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. An identically configured model is used in each application, the only change being the input files that provide the training-set environment and taxon-count data. The performance of BUMPER is shown to be comparable with weighted average partial least squares (WAPLS) in each case. Additional artificial datasets are constructed with similar characteristics to the real data, and these are used to explore the reasons for the differing performances of the different training sets.
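A toy sketch of the Bayesian transfer-function idea (not BUMPER's actual code): each taxon's occurrence probability is modelled as a Gaussian function of the environmental variable, and the posterior for an assemblage is the normalised product of per-taxon likelihoods over a grid of candidate values. Optima and tolerances would normally be calibrated from the training set; the numbers here are invented.

import numpy as np

x = np.linspace(0, 30, 301)                      # candidate temperatures
taxa = {"taxonA": (8.0, 2.5), "taxonB": (14.0, 4.0), "taxonC": (21.0, 3.0)}

def posterior(present, absent):
    logp = np.zeros_like(x)
    for name in present:
        u, t = taxa[name]
        p = np.exp(-0.5 * ((x - u) / t) ** 2)    # occurrence probability
        logp += np.log(np.clip(p, 1e-12, 1.0))
    for name in absent:
        u, t = taxa[name]
        p = np.exp(-0.5 * ((x - u) / t) ** 2)
        logp += np.log(np.clip(1.0 - p, 1e-12, 1.0))
    post = np.exp(logp - logp.max())             # normalise for stability
    return post / post.sum()

post = posterior(present=["taxonA", "taxonB"], absent=["taxonC"])
x_hat = x[post.argmax()]                         # reconstructed value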
Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.K.; Buelt, J.L.; Stottlemyre, J.A.
1991-02-01
Because of the complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost-estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular, and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.
ERIC Educational Resources Information Center
Edwards, Keith
2015-01-01
Attacks on computer systems continue to be a problem. The majority of the attacks target home computer users. To help mitigate the attacks, some companies provide security awareness training to their employees. However, not all people work for a company that provides security awareness training and, typically, home computer users do not have the…
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
Development of a site analysis tool for distributed wind projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Shawn
The Cadmus Group, Inc., in collaboration with the National Renewable Energy Laboratory (NREL) and Encraft, was awarded a grant from the Department of Energy (DOE) to develop a site analysis tool for distributed wind technologies. As the principal investigator for this project, Mr. Shawn Shaw was responsible for overall project management, direction, and technical approach. The product resulting from this project is the Distributed Wind Site Analysis Tool (DSAT), a software tool for analyzing proposed sites for distributed wind technology (DWT) systems. This user-friendly tool supports the long-term growth and stability of the DWT market by providing reliable, realistic estimates of site and system energy output and feasibility. DSAT, which is accessible online and requires no purchase or download of software, is available in two account types. Standard: this free account allows the user to analyze a limited number of sites and to produce a system performance report for each. Professional: for a small annual fee, users can analyze an unlimited number of sites, produce system performance reports, and generate other customizable reports containing key information such as visual influence and wind resources. The tool's interactive maps allow users to create site models that incorporate the obstructions and terrain types present. Users can generate site reports immediately after entering the requisite site information. Ideally, this tool also educates users regarding good site selection and effective evaluation practices.
National Fusion Collaboratory: Grid Computing for Simulations and Experiments
NASA Astrophysics Data System (ADS)
Greenwald, Martin
2004-05-01
The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory, and modeling, and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network-available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure, which allows for secure authentication and resource authorization and lets stakeholders control their own resources such as computers, data, and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from implementation details so that transparency and ease of use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring, and management have been developed and shared among a wide user base. This approach saves user sites the laborious effort of maintaining such a large and complex code while reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool, and capabilities for sharing displays and analysis tools over local and wide-area networks.
Evaluation of personal digital assistant drug information databases for the managed care pharmacist.
Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W
2003-01-01
Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and our experience. Some of these databases have attractive auxiliary features, such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.
Seismic design parameters - A user guide
Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.
2001-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure that is based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997, additional codes and standards have adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories: (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot the response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.
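As a worked illustration of the general design procedure these documents share, the sketch below computes a NEHRP-style design response spectrum from the mapped short-period and 1-s spectral accelerations (Ss, S1) and site coefficients (Fa, Fv). The input values are invented for illustration; coefficients for a real site must come from the maps and tables in the design documents.

def design_spectrum(Ss, S1, Fa, Fv, periods):
    SDS = (2.0 / 3.0) * Fa * Ss      # design short-period acceleration
    SD1 = (2.0 / 3.0) * Fv * S1      # design 1-second acceleration
    T0, Ts = 0.2 * SD1 / SDS, SD1 / SDS
    Sa = []
    for T in periods:
        if T < T0:                   # rising branch
            Sa.append(SDS * (0.4 + 0.6 * T / T0))
        elif T <= Ts:                # constant-acceleration plateau
            Sa.append(SDS)
        else:                        # constant-velocity branch
            Sa.append(SD1 / T)
    return Sa

spectrum = design_spectrum(Ss=1.5, S1=0.6, Fa=1.0, Fv=1.5,
                           periods=[0.05 * k for k in range(1, 81)])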
Zhao, Yu; Liu, Yide; Lai, Ivan K W; Zhang, Hongfeng; Zhang, Yi
2016-03-18
As one of the latest revolutions in networking technology, social networks allow users to keep connected and exchange information. Driven by rapid wireless technology development and the diffusion of mobile devices, social networks have experienced a tremendous change based on mobile sensor computing, and more and more mobile sensor network applications have appeared along with a huge number of users. An in-depth discussion of the human-computer interaction (HCI) issues of mobile sensor computing is therefore required. This study extends the discussion on HCI by examining the relationships among users' compound attitudes (i.e., affective and cognitive attitudes), engagement, and electronic word-of-mouth (eWOM) behaviors in the context of mobile sensor computing. A conceptual model is developed, based on which 313 valid questionnaires are collected. The research assesses how user-technology factors, including compound attitudes and engagement, affect eWOM in mobile sensor computing, providing a basis for further HCI study. We also find that user engagement plays a mediating role between users' compound attitudes and eWOM. The results can help the mobile sensor computing industry develop effective strategies and build strong user-product (brand) relationships.
Unaffiliated Users' Access to Academic Libraries: A Survey.
ERIC Educational Resources Information Center
Courtney, Nancy
2003-01-01
Most of 814 academic libraries surveyed allow onsite access to unaffiliated users, and many give borrowing privileges to certain categories of users. Use of library computers to access library resources and other computer applications is commonly allowed, although authentication on library computers is increasing. Five tables show statistics.
A User Assessment of Workspaces in Selected Music Education Computer Laboratories.
ERIC Educational Resources Information Center
Badolato, Michael Jeremy
A study of 120 students selected from the user populations of four music education computer laboratories was conducted to determine the applicability of current ergonomic and environmental design guidelines in satisfying the needs of users of educational computing workspaces. Eleven categories of workspace factors were organized into a…
Managing End User Computing in the Federal Government.
ERIC Educational Resources Information Center
General Services Administration, Washington, DC.
This report presents an initial approach developed by the General Services Administration for the management of end user computing in federal government agencies. Defined as technology used directly by individuals in need of information products, end user computing represents a new field encompassing such technologies as word processing, personal…
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
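The abstract above describes an adaptive policy rather than a specific algorithm. A minimal sketch of one plausible reading, with hypothetical data structures, frequency steps, and satisfaction threshold: ratings are recorded per (user, application, frequency), and the system runs at the lowest frequency whose predicted satisfaction stays above the threshold.

FREQS_MHZ = [600, 1000, 1400, 1800, 2200]   # hypothetical processor steps

class SatisfactionModel:
    def __init__(self):
        self.samples = {}   # (user, app) -> {freq: [ratings]}

    def record(self, user, app, freq, rating):
        self.samples.setdefault((user, app), {}).setdefault(freq, []).append(rating)

    def predict(self, user, app, freq):
        ratings = self.samples.get((user, app), {}).get(freq)
        return sum(ratings) / len(ratings) if ratings else None

    def choose_frequency(self, user, app, threshold=4.0):
        # Pick the lowest frequency with adequate predicted satisfaction.
        for f in FREQS_MHZ:
            score = self.predict(user, app, f)
            if score is not None and score >= threshold:
                return f
        return FREQS_MHZ[-1]        # no data or nothing adequate: full speed

model = SatisfactionModel()
model.record("alice", "editor", 600, 3.0)
model.record("alice", "editor", 1000, 4.5)
freq = model.choose_frequency("alice", "editor")   # -> 1000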
An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.
1994-01-01
An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.
An assessment of future computer system needs for large-scale computation
NASA Technical Reports Server (NTRS)
Lykos, P.; White, J.
1980-01-01
Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.
Identity-Based Authentication for Cloud Computing
NASA Astrophysics Data System (ADS)
Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao
Cloud computing is a recently developed technology for complex systems with massive-scale services shared among numerous users. Authentication of both users and services is therefore a significant issue for the trust and security of cloud computing. The SSL Authentication Protocol (SAP), once applied in cloud computing, becomes so complicated that it imposes a heavy load on users in both computation and communication. This paper, based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, presents a new identity-based authentication protocol for cloud computing and services. Simulation testing shows that the authentication protocol is more lightweight and efficient than SAP, especially on the user side. This merit, together with the model's great scalability, makes it well suited to massive-scale clouds.
Facilitating Navigation Through Large Archives
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Smith, Stephanie L.; Troung, Dat; Hodgson, Terry R.
2005-01-01
Automated Visual Access (AVA) is a computer program that effectively makes a large collection of information visible in a manner that enables a user to quickly and efficiently locate information resources, with minimal need for conventional keyword searches and perusal of complex hierarchical directory systems. AVA includes three key components: (1) a taxonomy comprising a collection of words and phrases, clustered according to meaning, that are used to classify information resources; (2) a statistical indexing and scoring engine; and (3) a component that generates a graphical user interface that uses the scoring data to generate a visual map of resources and topics. The top level of an AVA display is a pictorial representation of an information archive. The user enters the depicted archive by clicking on a depicted subject-area cluster, selecting a topic from a list, or entering a query into a text box. The resulting display enables the user to view candidate information entities at various levels of detail. Resources are grouped spatially by topic, with the greatest generality at the top layer and increasing detail with depth. The user can zoom in or out of specific sites or into greater or lesser content detail.
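A generic stand-in for the statistical indexing and scoring engine (component 2), assuming a simple TF-IDF-weighted overlap between a resource's terms and each topic cluster; none of these names come from AVA itself.

import math
from collections import Counter

def tfidf_scores(resource_terms, clusters, corpus):
    n_docs = len(corpus)
    df = Counter(t for doc in corpus for t in set(doc))  # document frequency
    tf = Counter(resource_terms)                         # term frequency
    scores = {}
    for topic, topic_terms in clusters.items():
        # Rare terms (low df) carry more weight than ubiquitous ones.
        scores[topic] = sum(
            tf[t] * math.log((1 + n_docs) / (1 + df[t]))
            for t in topic_terms)
    return scores   # the GUI would place the resource near its top topics

corpus = [["mars", "rover", "soil"], ["rover", "camera"], ["soil", "chemistry"]]
clusters = {"planetary": ["mars", "soil"], "instruments": ["camera", "rover"]}
scores = tfidf_scores(["mars", "soil", "soil"], clusters, corpus)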
Two-cloud-servers-assisted secure outsourcing multiparty computation.
Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin
2014-01-01
We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data, respectively encrypted by different users' public keys, into data encrypted by the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts and compute an encrypted result following the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result, so that all authorized users can recover the desired result while unauthorized ones, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexities of each user in our solution are independent of the computing function.
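The paper's lattice-based transformation is beyond a short sketch, but the two-noncolluding-servers model it relies on can be illustrated with plain additive secret sharing, a deliberately simpler stand-in technique: each user splits an input into two shares, one per server, so neither server alone learns anything, and only the recombined sums reveal the result.

import secrets

P = 2**61 - 1                          # public prime modulus (toy parameter)

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P              # (share for server 1, server 2)

def reconstruct(s1, s2):
    return (s1 + s2) % P

inputs = [12, 30, 7]                   # three users' private inputs
shares = [share(x) for x in inputs]
sum1 = sum(s1 for s1, _ in shares) % P # computed locally by server 1
sum2 = sum(s2 for _, s2 in shares) % P # computed locally by server 2
assert reconstruct(sum1, sum2) == sum(inputs)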
Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P.; Gao, Xin; Harb, Omar S.; Kraemer, Eileen T.; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C.; Roos, David S.; Stoeckert, Christian J.
2011-01-01
Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as the first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect, or minus) or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy's flow obvious. The interface facilitates interactive adjustment of the component searches, with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases and has been successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org PMID:21705364
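A minimal sketch of the strategy idea, with placeholder gene-ID sets standing in for EuPathDB's actual searches, shows how steps compose with set operators so the whole strategy stays inspectable and re-runnable.

def run_strategy(steps):
    # Each step is (operator, ids); the first step seeds the result set.
    result = None
    for op, ids in steps:
        ids = set(ids)
        if result is None:
            result = ids
        elif op == "union":
            result |= ids
        elif op == "intersect":
            result &= ids
        elif op == "minus":
            result -= ids
    return result

enzymes         = ["g1", "g2", "g3", "g5"]   # placeholder search results
host_orthologs  = ["g3", "g5"]
stage_expressed = ["g1", "g3", "g4"]
hits = run_strategy([(None, enzymes),
                     ("minus", host_orthologs),
                     ("intersect", stage_expressed)])   # -> {"g1"}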
Distributed user interfaces for clinical ubiquitous computing applications.
Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik
2005-08-01
Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model, which allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events, and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs, and similar network-based user interfaces, will be a prerequisite for future mobile user interfaces and essential to the development of clinical multi-device environments.
NASA Technical Reports Server (NTRS)
Grantham, C.
1979-01-01
The Interactive Software Invocation System (ISIS), an interactive data management system, was developed to act as a buffer between the user and the host computer system. ISIS provides the user with a powerful system for developing software or systems in the interactive environment. The user is shielded from the idiosyncrasies of the host computer system by such a complete range of capabilities that the user should have no need for direct access to the host computer. These capabilities are divided into four areas: desk-top calculator, data editor, file manager, and tool invoker.
Users guide for guidance and control Launch and Abort Simulation for Spacecraft (LASS), volume 1
NASA Technical Reports Server (NTRS)
Havig, T. F.; Backman, H. D.
1972-01-01
The mathematical models and computer program used to implement LASS are described. The computer program provides for simulation of boost to orbit and of abort capability from boost trajectories to a prescribed target. The abort target provides a decision point for engine shutdown, from which the vehicle coasts to the vicinity of the selected abort recovery site. The simulation is a six-degree-of-freedom rigid-body simulation. The vehicle is influenced by forces and moments from nondistributed aerodynamics. An adaptive autopilot controls vehicle attitudes during powered and unpowered flight, and a conventional autopilot is provided for study of the vehicle during powered flight.
A SCILAB Program for Computing Rotating Magnetic Compact Objects
NASA Astrophysics Data System (ADS)
Papasotiriou, P. J.; Geroyannis, V. S.
We implement the so-called "complex-plane iterative technique" (CIT) for the computation of classical differentially rotating magnetic white dwarf and neutron star models. The program has been written in SCILAB (© INRIA-ENPC), a matrix-oriented high-level programming language, which can be downloaded free of charge from the site http://www-rocq.inria.fr/scilab. Due to the advanced capabilities of this language, the code is short and understandable. Highlights of the program are: (a) its time-saving character, (b) ease of use due to the built-in graphical user interface, and (c) easy interfacing with Fortran via online dynamic linking. We interpret our numerical results in various ways by extensively using the graphics environment of SCILAB.
All new custom path photo book creation
NASA Astrophysics Data System (ADS)
Wang, Wiley; Muzzolini, Russ
2012-03-01
In this paper, we present an all-new custom path that gives consumers full control over their photos and the format of their books, while providing guidance to make creation fast and easy. Users can choose to fully automate the initial creation and then customize every page. The system manages many design themes along with numerous design elements, such as layouts, backgrounds, embellishments, and pattern bands. Users can draw on photos from multiple sources, including their computers, Shutterfly accounts, Shutterfly Share sites, and Facebook. They can also use a photo as a background, and add, move, and resize photos and text, putting what they want where they want instead of being confined to templates. The new path allows users to add embellishments anywhere in the book, and the high-performance platform can support up to 1,000 photos per book and up to 25 pictures per page. The path offers either Smart Autofill or Storyboard features, allowing customers to populate their books with photos so they can add captions and customize the pages.
Networked differential GPS system
NASA Technical Reports Server (NTRS)
Sheynblat, Leonid (Inventor); Kalafus, Rudolph M. (Inventor); Loomis, Peter V. W. (Inventor); Mueller, K. Tysen (Inventor)
1994-01-01
An embodiment of the present invention relates to a worldwide network of differential GPS reference stations (NDGPS) that continually track the entire GPS satellite constellation and provide interpolations of reference station corrections tailored for particular user locations between the reference stations. Each reference station takes real-time ionospheric measurements with codeless cross-correlating dual-frequency carrier GPS receivers and computes real-time orbit ephemerides independently. An absolute pseudorange correction (PRC) is defined for each satellite as a function of a particular user's location. A map of the function is constructed, with iso-PRC contours. The network measures the PRCs at a few points, the so-called reference stations, and constructs an iso-PRC map for each satellite. Corrections are interpolated for each user's site on a subscription basis. The data bandwidths are kept to a minimum by transmitting information that cannot be obtained directly by the user and by updating information by classes, according to how quickly each class of data goes stale given the realities of the GPS system. Sub-decimeter-level kinematic accuracy over a given area is accomplished by establishing a mini-fiducial network.
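As an illustration of the interpolation step, the sketch below estimates a satellite's PRC at a user location from nearby reference-station measurements using inverse-distance weighting, a generic stand-in for the patent's iso-PRC maps; coordinates and correction values are invented.

import math

def interpolate_prc(user_xy, stations):
    """stations: list of ((x, y), prc_metres) reference measurements."""
    num = den = 0.0
    for (x, y), prc in stations:
        d = math.hypot(user_xy[0] - x, user_xy[1] - y)
        if d < 1e-9:
            return prc                 # user sits on a reference station
        w = 1.0 / d                    # nearer stations weigh more
        num += w * prc
        den += w
    return num / den

prc = interpolate_prc((120.0, 80.0),
                      [((0.0, 0.0), 2.4), ((300.0, 0.0), 1.9),
                       ((150.0, 250.0), 2.1)])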
Asymmetric GT of social networks
NASA Astrophysics Data System (ADS)
Szu, Harold
2010-04-01
Web citation indexes are computed from a data vector X collected from the frequency of user accesses and from citations weighted by other sites' popularities, modified by financial sponsorship in a proprietary manner. The indexing that determines the information retrieved by the public should be made transparently accountable in at least two ways. One should balance the inbound linkages pointing at a specific i-th site, called the popularity (see paper for equation), against the outbound linkages (see paper for equation), called the risk factor, before the release of new information, much as in an environmental impact analysis. The relationship between these two factors cannot be assumed to be equivalent (undirected), as it is in many mainstream Graph Theory (GT) models.
Predicting lysine glycation sites using bi-profile bayes feature extraction.
Ju, Zhe; Sun, Juhe; Li, Yanjie; Wang, Li
2017-12-01
Glycation is a nonenzymatic post-translational modification that has been found to be involved in various biological processes and to be closely associated with many metabolic diseases. The accurate identification of glycation sites is important for understanding the underlying molecular mechanisms of glycation. As the traditional experimental methods are often labor-intensive and time-consuming, it is desirable to develop computational methods to predict glycation sites. In this study, a novel predictor named BPB_GlySite is proposed to predict lysine glycation sites by using bi-profile Bayes feature extraction and a support vector machine algorithm. As illustrated by 10-fold cross-validation, BPB_GlySite achieves a satisfactory performance, with a sensitivity of 63.68%, a specificity of 72.60%, an accuracy of 69.63%, and a Matthews correlation coefficient of 0.3499. Experimental results also indicate that BPB_GlySite significantly outperforms three existing glycation site predictors: NetGlycate, PreGly, and Gly-PseAAC. Therefore, BPB_GlySite can be a useful bioinformatics tool for the prediction of glycation sites. A user-friendly web server for BPB_GlySite is established at 123.206.31.171/BPB_GlySite/. Copyright © 2017 Elsevier Ltd. All rights reserved.
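A toy sketch of bi-profile Bayes feature extraction as commonly described for predictors of this kind: a peptide window of length L becomes a 2L-dimensional vector of position-specific residue probabilities, the first L taken from the positive training profile and the last L from the negative one. The training windows and window length below are invented; BPB_GlySite's actual training data and SVM stage are not reproduced here.

from collections import Counter

def build_profile(windows):
    # Per-position residue frequencies across equal-length training windows.
    L = len(windows[0])
    counts = [Counter(w[i] for w in windows) for i in range(L)]
    return [{aa: c / len(windows) for aa, c in col.items()} for col in counts]

def bpb_features(peptide, pos_profile, neg_profile):
    feats = [pos_profile[i].get(aa, 0.0) for i, aa in enumerate(peptide)]
    feats += [neg_profile[i].get(aa, 0.0) for i, aa in enumerate(peptide)]
    return feats    # would be fed to an SVM classifier

pos = ["AKV", "GKV", "AKL"]    # windows around glycated lysines (toy)
neg = ["PQR", "PQV", "AQR"]    # windows around non-glycated lysines (toy)
vec = bpb_features("AKV", build_profile(pos), build_profile(neg))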
Web-based pathology practice examination usage.
Klatt, Edward C
2014-01-01
General and subject-specific practice examinations for students in the health sciences studying pathology were placed onto a free public internet web site, entitled WebPath, and were accessed four clicks from the home web site menu. Multiple-choice questions were coded into .html files with JavaScript functions for web browser viewing in a timed format. A Perl script with a common gateway interface for web page forms scored examinations and placed results into a log file on an internet computer server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject-specific examinations of 10 questions each, with accompanying images, could be completed in up to 15 min each. The scores and user educational fields of study from the log files were compiled from June 2006 to January 2014. The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and an average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% by 95% of users. In top-to-bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject-specific examinations had 103,028 completions, with a completion rate of 73% and an average score of 74%. A score of 100% was achieved by 20% of users overall, ≥90% by 37%, and ≥50% by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Completion rates were higher for the shorter 10-question subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. The scores achieved suggest that a cohort of serious users who fully completed the examinations had sufficient preparation to use them to support their pathology education.
Icon and user interface design for emergency medical information systems: a case study.
Salman, Y Batu; Cheng, Hong-In; Patterson, Patrick E
2012-01-01
A usable medical information system should allow for reliable and accurate interaction between users and the system in emergencies. A participatory design approach was used to develop a medical information system in two Turkish hospitals. The process consisted of task and user analysis, an icon design survey, initial icon design, final icon design and evaluation, and installation of the iconic medical information system with the icons. We observed work sites to note working processes and tasks related to the information system and interviewed medical personnel. Emergency personnel then participated in the design process to develop a usable graphical user interface, by drawing icon sketches for 23 selected tasks. Similar sketches were requested for specific tasks such as family medical history, contact information, translation, addiction, required inspections, requests and applications, and nurse observations. The sketches were analyzed and redesigned into computer icons by professional designers and the research team. A second group of physicians and nurses then tested the understandability of the icons. The user interface layout was examined and evaluated by system users, followed by the system's installation. Medical personnel reported the participatory design process was interesting and believed the resulting designs would be more familiar and friendlier. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Computational Science News | Computational Science | NREL
NREL Launches New Website for High-Performance Computing System Users (February 28, 2018): The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.
An Implemented Strategy for Campus Connectivity and Cooperative Computing.
ERIC Educational Resources Information Center
Halaris, Antony S.; Sloan, Lynda W.
1989-01-01
ConnectPac, a software package developed at Iona College to allow a computer user to access all services from a single personal computer, is described. ConnectPac uses mainframe computing to support a campus computing network, integrating personal and centralized computing into a menu-driven user environment. (Author/MLW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Rautman, Christopher Arthur
2004-02-01
The geologic model implicit in the original site characterization report for the Bayou Choctaw Strategic Petroleum Reserve Site near Baton Rouge, Louisiana, has been converted to a numerical, computer-based three-dimensional model. The original site characterization model was successfully converted with minimal modifications and use of new information. The geometries of the salt diapir, selected adjacent sedimentary horizons, and a number of faults have been modeled. Models of a partial set of the several storage caverns that have been solution-mined within the salt mass are also included. Collectively, the converted model appears to be a relatively realistic representation of the geology of the Bayou Choctaw site as known from existing data. A small number of geometric inconsistencies and other problems inherent in 2-D vs. 3-D modeling have been noted. Most of the major inconsistencies involve faults inferred from drill hole data only. Modern computer software allows visualization of the resulting site model and its component submodels with a degree of detail and flexibility that was not possible with conventional, two-dimensional and paper-based geologic maps and cross sections. The enhanced visualizations may be of particular value in conveying geologic concepts involved in the Bayou Choctaw Strategic Petroleum Reserve site to a lay audience. A Microsoft Windows PC-based viewer and user-manipulable model files illustrating selected features of the converted model are included in this report.
Unidata's Vision for Providing Comprehensive and End-to-end Data Services
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.
2009-05-01
This paper presents Unidata's vision for providing comprehensive, well-integrated, and end-to-end data services for the geosciences. These include an array of functions for collecting, finding, and accessing data; data management tools for generating, cataloging, and exchanging metadata; and services for submitting or publishing, sharing, analyzing, visualizing, and integrating data. When this vision is realized, users, no matter where they are or how they are connected to the Internet, will be able to find and access a plethora of geosciences data and use Unidata-provided tools and services both productively and creatively in their research and education. What that vision means for the Unidata community is elucidated by drawing a simple analogy. Most users are familiar with e-commerce sites like Amazon and eBay and content-sharing sites like YouTube and Flickr. On the eBay marketplace, people can sell practically anything at any time, and buyers can share their experience of purchasing a product or the reputation of a seller. Likewise, at Amazon, thousands of merchants sell their goods, and millions of customers not only buy those goods but provide reviews or opinions of the products they buy and share their experiences as purchasers. Similarly, YouTube and Flickr are sites tailored to video and photo sharing, respectively, where users can upload their own content and share it with millions of other users, including family and friends. What all these sites, together with social-networking applications like MySpace and Facebook, have enabled is a sense of a virtual community in which users can search and browse products or content, comment on and rate those products from anywhere, at any time, and via any Internet-enabled device like an iPhone, laptop, or desktop computer. In essence, these enterprises have fundamentally altered people's buying modes and behavior toward purchases. Unidata believes that similar approaches, appropriately tailored to meet the needs of the scientific community, can be adopted to provide and share geosciences data and to collaborate actively in the future. For example, future case-study data access systems, in addition to providing datasets and tools, will provide services that allow users to comment on a weather event, say a hurricane, as well as give feedback on the quality, usefulness, and interpretation of the datasets through integrated blogs, forums, and wikis, along with uploading and sharing derived products, ancillary materials that users might have gathered (such as photos and videos from the storm), and publications and curricular materials they develop, all through a single data portal. In essence, such case-study collections will be "living" or dynamic, allowing users to be contributors as well, as they add value to and grow existing case-study collections.
Comparison Analysis among Large Amount of SNS Sites
NASA Astrophysics Data System (ADS)
Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro
In recent years, Social Networking Services (SNS) and blogs have grown as new communication tools on the Internet. Several large-scale SNS sites are prospering; meanwhile, many sites of relatively small scale offer services. Such small-scale SNSs support small-group, isolated communication of a kind that neither mixi nor MySpace can provide. However, most studies of SNS concern particular large-scale services and cannot tell whether their results reflect general features or special characteristics of those services. From the standpoint of comparison analysis, comparing just a handful of SNSs cannot reach a statistically significant level. We therefore analyze many SNS sites with the aim of classifying them by several approaches. Our paper classifies 50,000 small-scale SNS sites and characterizes them in terms of network structure, patterns of communication, and growth rate. The analysis of network structure shows that many SNS sites have the small-world attributes of short path lengths and high clustering coefficients. The degree distributions of the SNS sites are close to a power law. This result indicates that the small-scale SNS sites have a higher percentage of users with many friends than mixi. According to the analysis of their assortativity coefficients, these SNS sites have negative assortativity, meaning that users with high degree tend to connect to users with small degree. Next, we analyze the patterns of user communication. A friend network on an SNS is explicit, while users' communication behaviors define an implicit network. What kind of relationship do these networks have? To address this question, we obtain characteristics of users' communication structure and activation patterns on the SNS sites. Using two new indexes, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate have active diary postings and comments. Moreover, sites with a high friend aggregation rate and a high friend coverage rate become activated when hub users with high degree do not behave actively, whereas activation emerges when hub users behave actively on sites with a low friend aggregation rate and a high friend coverage rate. Finally, we observe SNS sites whose user numbers are increasing considerably, from the viewpoint of network structure, and extract the characteristics of high-growth SNS sites. As a result of discrimination based on decision tree analysis, we can recognize high-growth SNS sites with a high degree of accuracy. This approach also suggests that mixi and the other small-scale SNS sites have different character traits.
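The structural measures used in this comparison are standard and easy to reproduce. A sketch on a toy friend network, using the networkx library, computes the small-world indicators and the (possibly negative) degree assortativity discussed above; the edge list is invented.

import networkx as nx

# Toy friend network: node "a" acts as a hub connected to low-degree users.
G = nx.Graph([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
              ("d", "e"), ("e", "f"), ("a", "e")])

avg_path = nx.average_shortest_path_length(G)        # short paths
clustering = nx.average_clustering(G)                # high local clustering
assortativity = nx.degree_assortativity_coefficient(G)  # negative: hub-to-leaf
print(avg_path, clustering, assortativity)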
ERIC Educational Resources Information Center
Association Supporting Computer Users in Education, 2017
2017-01-01
The Association Supporting Computer Users in Education (ASCUE) is a group of people interested in small college computing issues. It is a blend of people from all over the country who use computers in their teaching, academic support, and administrative support functions. Begun in 1968 as the College and University Eleven-Thirty Users' Group…
MetalPDB in 2018: a database of metal sites in biological macromolecular structures.
Putignano, Valeria; Rosato, Antonio; Banci, Lucia; Andreini, Claudia
2018-01-04
MetalPDB (http://metalweb.cerm.unifi.it/) is a database providing information on metal-binding sites detected in the three-dimensional (3D) structures of biological macromolecules. MetalPDB represents such sites as 3D templates, called Minimal Functional Sites (MFSs), which describe the local environment around the metal(s) independently of the larger context of the macromolecular structure. The 2018 update of MetalPDB includes new contents and tools. A major extension is the inclusion of proteins whose structures do not contain metal ions although their sequences potentially contain a known MFS. In addition, MetalPDB now provides extensive statistical analyses addressing several aspects of general metal usage within the PDB, across protein families and in catalysis. Users can also query MetalPDB to extract statistical information on structural aspects associated with individual metals, such as preferred coordination geometries or amino acid environments. A further major improvement is the functional annotation of MFSs; the annotation is performed manually via a password-protected annotator interface. At present, ∼50% of all MFSs have such a functional annotation. Other noteworthy improvements are bulk query functionality, through the upload of a list of PDB identifiers, and FTP access to MetalPDB contents, allowing users to carry out in-depth analyses on their own computational infrastructure. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Testing the Effectiveness of Interactive Multimedia for Library-User Education
ERIC Educational Resources Information Center
Markey, Karen; Armstrong, Annie; De Groote, Sandy; Fosmire, Michael; Fuderer, Laura; Garrett, Kelly; Georgas, Helen; Sharp, Linda; Smith, Cheri; Spaly, Michael; Warner, Joni E.
2005-01-01
A test of the effectiveness of interactive multimedia Web sites demonstrates that library users' topic knowledge was significantly greater after visiting the sites than before. Library users want more such sites about library services, their majors, and campus life generally. Librarians describe the roles they want to play on multimedia production…
1980-03-01
The Fleet Numerical Oceanography Center (FNOC) is currently testing and evaluating a computerized flight plan system, referred to, for short, as OPARS. This system, developed to replace the Lockheed Jetplan flight plan system, provides users at remote sites with direct access to the FNOC computer via 11 telephone lines. The ... validity, but only for format. For example, an entry of ABCE, as the four-letter identification code for the destination airfield, would be accepted.
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-06-01
Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without the need for intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other, more abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with the implementation to date and ongoing efforts to develop a more sophisticated intent-inferencing methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2008-01-24
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1 - December 31, 2007, for the fixed sites and the mobile site. The AMF has been deployed to Germany, and this was its final operational quarter. The first quarter comprises a total of 2,208 hours. Although the average exceeded our goal this quarter, a series of severe weather events (i.e., widespread ice storms) disrupted utility services, which affected the SGP performance measures. Some instruments were covered in ice, and power and data communication lines were down for more than 10 days in some areas of Oklahoma and Kansas, which resulted in lost data at the SGP site. The Site Access Request System is a web-based database used to track visitors to the fixed sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. The AMF completed its mission at the end of this quarter in Haselback, Germany (FKB designation). NIM represents the AMF statistics for the past deployment in Niamey, Niger, in 2006. PYE represents just the AMF Archive statistics for the past deployment in Point Reyes, California, in 2005. In addition, users who do not want to wait for data to be provided through the ACRF Archive can request an account on the local site data system. The eight research computers are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; the DMF at PNNL; and the AMF, currently in Germany. In addition, the ACRF serves as a data repository for a long-term Arctic atmospheric observatory in Eureka, Canada (80 degrees 05 minutes N, 86 degrees 43 minutes W) as part of the multiagency Study of Environmental Arctic Change (SEARCH) Program. NOAA began providing instruments for the site in 2005, and currently cloud radar data are available. The intent of the site is to monitor the important components of the Arctic atmosphere, including clouds, aerosols, atmospheric radiation, and local-scale atmospheric dynamics. Because of the similarity of ACRF NSA data streams, and the important synergy that can be formed between a network of Arctic atmospheric observations, much of the SEARCH observatory data are archived in the ARM Archive. Instruments will be added to the site over time. For more information, please visit http://www.db.arm.gov/data. The designation for the archived Eureka data is YEU, and it is now included in the ACRF user metrics. This quarterly report provides the cumulative numbers of visitors and user accounts by site for the period January 1, 2007 - December 31, 2007.
Table 2 shows the summary of cumulative users for the period January 1, 2007 - December 31, 2007. For the first quarter of FY 2008, the overall number of users was up significantly from the last reporting period. For the fourth consecutive reporting period, a record high number of Archive users was recorded. In addition, the number of visitors and visitor days set a new record this reporting period, largely due to the large number of field campaign activities in conjunction with the AMF deployment in Germany. It is interesting to note this quarter that 22% (a slight decrease from last quarter) of the Archive users are ARM Science-funded principal investigators and 35% (the same as last quarter) of all other facility users are either ARM Science-funded principal investigators or ACRF infrastructure personnel. For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period October 1 - December 31, 2007. There were no incidents this reporting period.
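The data-availability metric described above is simple enough to sketch in code. The following Python fragment is illustrative only - the record keys, counts, and datastream name are hypothetical placeholders, not ARM's actual accounting:

```python
def availability(received, expected):
    """Ratio of data records received at the Archive to records expected.

    Both arguments map (datastream, site, month) keys to record counts;
    the keys and numbers used below are illustrative placeholders.
    """
    ratios = {}
    for key, exp in expected.items():
        got = received.get(key, 0)
        ratios[key] = got / exp if exp else None  # undefined if none expected
    return ratios

# One hypothetical instrument stream at SGP in December 2007:
received = {("mfrsr", "sgp", "2007-12"): 1380}
expected = {("mfrsr", "sgp", "2007-12"): 1440}
print(availability(received, expected))  # ratio ~0.958
```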
The DFVLR main department for central data processing, 1976 - 1983
NASA Technical Reports Server (NTRS)
1983-01-01
Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.
User Centered System Design: Papers for the CHI '83 Conference on Human Factors in Computer Systems.
ERIC Educational Resources Information Center
California Univ., San Diego. Center for Human Information Processing.
Four papers from the University of California at San Diego (UCSD) Project on Human-Computer Interfaces are presented in this report. "Evaluation and Analysis of User's Activity Organization," by Liam Bannon, Allen Cypher, Steven Greenspan, and Melissa Monty, analyzes the activities performed by users of computer systems, develops a…
THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauschlicher, C. W.; Ricca, A.; Boersma, C.
The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.
The Electron Microscopy Outreach Program: A Web-based resource for research and education.
Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H
1999-01-01
We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon
2017-02-28
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
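The adaptive display modes described above reduce to a simple optical relationship: a point at distance d meters demands a vergence of 1/d diopters, and a user's spherical refractive error shifts that demand. A minimal sketch of this relationship, illustrative only (the paper's prototypes drive focus-tunable lenses and actuated displays from live gaze data):

```python
def required_lens_power(gaze_distance_m: float, refractive_error_d: float = 0.0) -> float:
    """Dioptric demand to place virtual content at the gazed depth.

    The vergence of a point at distance d is 1/d diopters; adding the
    user's spherical refractive error (e.g. -2.0 D for a myope) tailors
    the display to that user. An illustrative model, not the paper's.
    """
    if gaze_distance_m <= 0:
        raise ValueError("gaze distance must be positive")
    return 1.0 / gaze_distance_m + refractive_error_d

# A user with -1.5 D myopia looking at a virtual object 0.5 m away:
print(required_lens_power(0.5, -1.5))  # 0.5 D of net accommodation demand
```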
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
NASA Astrophysics Data System (ADS)
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon
2017-02-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
Seismic waveform modeling over cloud
NASA Astrophysics Data System (ADS)
Luo, Cong; Friederich, Wolfgang
2016-04-01
With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes, and obtaining synthetic waveforms through numerical simulation receives increasing attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data-processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a difficult task, and access to HPC is a further common obstacle for many users. To address these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC resources and a dedicated pipeline form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering computational resources over the cloud, the platform lets users customize simulations at expert level, then submit and run the jobs through it.
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.
2017-10-01
One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e., the batch computing farm) while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (using the Calico Project). Finally, we have studied the possibility of deploying an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.
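As a rough illustration of the first approach, a long-running HTCondor service can be registered with Marathon through its REST API. The endpoint, container image, and health-check command below are hypothetical placeholders, not the INDIGO-DataCloud configuration:

```python
import json
import urllib.request

# Hypothetical Marathon endpoint and container image: real deployment
# details (networking, volumes, HTCondor configuration) are site-specific.
MARATHON = "http://marathon.example.org:8080/v2/apps"

app = {
    "id": "/htcondor/startd",
    "instances": 4,                      # number of worker containers
    "cpus": 2.0,
    "mem": 4096,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/htcondor-startd:latest"},
    },
    # Marathon restarts any task whose health check fails, which is the
    # failover behavior the long-running-service approach relies on.
    "healthChecks": [{
        "protocol": "COMMAND",
        "command": {"value": "condor_who > /dev/null"},
    }],
}

req = urllib.request.Request(
    MARATHON,
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 201 when the app is created
```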
Integrated Data Base Program: a status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Notz, K.J.; Klein, J.A.
1984-06-01
The Integrated Data Base (IDB) Program provides official Department of Energy (DOE) data on spent fuel and radioactive waste inventories, projections, and characteristics. The accomplishments of FY 1983 are summarized for three broad areas: (1) upgrading and issuing the annual report on spent fuel and radioactive waste inventories, projections, and characteristics, including ORIGEN2 applications and a quality assurance plan; (2) creating a summary data file in user-friendly format for use on a personal computer and enhancing user access to program data; and (3) optimizing and documenting the data handling methodology used by the IDB Program and providing direct support to other DOE programs and sites in data handling. Plans for future work in these three areas are outlined. 23 references, 11 figures.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Nwadike, E. V.; Sinha, S. E.
1982-01-01
The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model is described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for cases where surface wave heights are significant compared to the mean water depth, such as estuaries and coastal regions. The latter is suited for small surface wave heights compared to depth, because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.
1982-01-01
The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for cases where surface wave heights are significant compared to the mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
Web Analytics: A Picture of the Academic Library Web Site User
ERIC Educational Resources Information Center
Black, Elizabeth L.
2009-01-01
This article describes the usefulness of Web analytics for understanding the users of an academic library Web site. Using a case study, the analysis describes how Web analytics can answer questions about Web site user behavior, including when visitors come, the duration of the visit, how they get there, the technology they use, and the most…
StreamStats: A water resources web application
Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.
2008-01-01
Streamflow statistics, such as the 1-percent flood, the mean flow, and the 7-day 10-year low flow, are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. For example, estimates of the 1-percent flood (the flow that is exceeded, on average, once in 100 years and has a 1-percent chance of being exceeded in any year, sometimes referred to as the 100-year flood) are used to create flood-plain maps that form the basis for setting insurance rates and land-use zoning. This and other streamflow statistics also are used for dam, bridge, and culvert design; water-supply planning and management; water-use appropriations and permitting; wastewater and industrial discharge permitting; hydropower facility design and regulation; and the setting of minimum required streamflows to protect freshwater ecosystems. In addition, researchers, planners, regulators, and others often need to know the physical and climatic characteristics of the drainage basins (basin characteristics) and the influence of human activities, such as dams and water withdrawals, on streamflow upstream from locations of interest to understand the mechanisms that control water availability and quality at those locations. Knowledge of the streamflow network and downstream human activities also is necessary to adequately determine whether an upstream activity, such as a water withdrawal, can be allowed without adversely affecting downstream activities. Streamflow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no streamflow data are available to compute the statistics. At U.S. Geological Survey (USGS) streamflow data-collection stations, which include streamgaging stations, partial-record stations, and miscellaneous-measurement stations, streamflow statistics can be computed from available data for the stations. Streamflow data are collected continuously at streamgaging stations. Streamflow measurements are collected systematically over a period of years at partial-record stations to estimate peak-flow or low-flow statistics. Streamflow measurements usually are collected at miscellaneous-measurement stations for specific hydrologic studies with various objectives. StreamStats is a Web-based Geographic Information System (GIS) application that was created by the USGS, in cooperation with Environmental Systems Research Institute, Inc. (ESRI), to provide users with access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats functionality is based on ESRI’s ArcHydro Data Model and Tools, described on the Web at http://resources.arcgis.com/en/communities/hydro/01vn0000000s000000.htm. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection stations and user-selected ungaged sites. It also allows users to identify stream reaches that are upstream and downstream from user-selected sites, and to identify and obtain information for locations along the streams where activities that may affect streamflow conditions are occurring. This functionality can be accessed through a map-based user interface that appears in the user’s Web browser, or individual functions can be requested remotely as Web services by other Web or desktop computer applications.
StreamStats can perform these analyses much faster than historically used manual techniques. StreamStats was designed so that each state would be implemented as a separate application, with a reliance on local partnerships to fund the individual applications, and a goal of eventual full national implementation. Idaho became the first state to implement StreamStats in 2003. By mid-2008, 14 states had applications available to the public, and 18 other states were in various stages of implementation.
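For a gaged site with an annual peak-flow record, the idea behind a statistic such as the 1-percent flood can be sketched as a quantile of a fitted distribution. The sketch below uses a simplified log-normal fit; USGS practice actually follows the log-Pearson Type III procedure with regional skew information, so the approach and numbers are illustrative only:

```python
import math
import statistics
from statistics import NormalDist

def percent_chance_flood(annual_peaks_cfs, aep=0.01):
    """Rough flood-quantile estimate from an annual peak-flow series.

    Fits a log-normal distribution to the peaks and returns the flow
    with the given annual exceedance probability (aep). This log-normal
    sketch only illustrates the idea; it is not the USGS procedure.
    """
    logs = [math.log10(q) for q in annual_peaks_cfs]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    z = NormalDist().inv_cdf(1.0 - aep)  # standard normal variate for the AEP
    return 10 ** (mu + z * sigma)

# Hypothetical 10-year annual peak record, in cubic feet per second:
peaks = [4200, 6100, 3800, 9500, 5200, 7300, 2900, 8100, 4600, 6800]
print(round(percent_chance_flood(peaks)))  # the "100-year" flow for this record
```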
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
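As a language-neutral illustration of the kind of statistic EFASC computes (EFASC itself is a VBA macro running in Excel), the following sketch computes the minimum 7-day average flow from a daily series:

```python
def seven_day_minimum(daily_flows):
    """Minimum 7-day average flow of a daily streamflow series.

    A sliding-window pass over the daily values; the input list below
    is an illustrative fragment, not real gage data.
    """
    if len(daily_flows) < 7:
        raise ValueError("need at least 7 daily values")
    window = sum(daily_flows[:7])
    best = window
    for i in range(7, len(daily_flows)):
        window += daily_flows[i] - daily_flows[i - 7]  # slide the 7-day window
        best = min(best, window)
    return best / 7.0

flows = [12.0, 11.5, 10.8, 10.2, 9.9, 9.7, 9.6, 9.8, 10.4, 11.0]
print(seven_day_minimum(flows))  # lowest 7-day mean in the series
```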
CFL3D Version 6.4-General Usage and Aeroelastic Analysis
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.
2006-01-01
This document contains the course notes on the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from a CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release. This part of the material is presented for users of the code who are not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage; where current usage supersedes older versions, this is noted. These course notes also provide hints for usage, code installation, and examples not found elsewhere.
Fast simulation tool for ultraviolet radiation at the earth's surface
NASA Astrophysics Data System (ADS)
Engelsen, Ola; Kylling, Arve
2005-04-01
FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
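The UV index that FastRT reports is, in essence, the erythemally weighted integral of spectral irradiance multiplied by 40 m²/W. A minimal sketch of that weighting and integration, assuming a sorted wavelength grid and using the CIE (McKinlay-Diffey) erythemal action spectrum; the example spectrum is invented for illustration:

```python
def erythemal_weight(wl_nm: float) -> float:
    """CIE (McKinlay-Diffey) erythemal action spectrum."""
    if wl_nm <= 298.0:
        return 1.0
    if wl_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wl_nm))
    if wl_nm <= 400.0:
        return 10.0 ** (0.015 * (139.0 - wl_nm))
    return 0.0

def uv_index(wavelengths_nm, irradiance):
    """UV index = 40 m^2/W times the erythemally weighted irradiance.

    `irradiance` is spectral irradiance in W m^-2 nm^-1 on the sorted
    grid `wavelengths_nm`; integration is by the trapezoidal rule.
    """
    w = [e * erythemal_weight(wl) for wl, e in zip(wavelengths_nm, irradiance)]
    integral = sum((w[i] + w[i + 1]) / 2.0 * (wavelengths_nm[i + 1] - wavelengths_nm[i])
                   for i in range(len(w) - 1))
    return 40.0 * integral

# Crude three-point example spectrum (illustrative values only):
print(uv_index([300.0, 320.0, 340.0], [0.01, 0.05, 0.08]))
```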
Lighting Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A database of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
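The point-source approximation mentioned above reduces each illumination value to an inverse-square-law sum over the luminaires. A minimal sketch, assuming isotropic sources rather than intensities interpolated from a photometric database as the program itself does:

```python
import math

def illuminance(grid_points, luminaires):
    """Point-by-point illuminance (lux) on a horizontal surface.

    Each luminaire is (x, y, mounting_height_m, intensity_cd); every
    source is treated as isotropic, which a real design tool would not do.
    """
    result = []
    for (px, py) in grid_points:
        total = 0.0
        for (lx, ly, h, intensity) in luminaires:
            d2 = (px - lx) ** 2 + (py - ly) ** 2 + h ** 2
            cos_incidence = h / math.sqrt(d2)        # tilt onto the surface
            total += intensity * cos_incidence / d2  # inverse-square law
        result.append(total)
    return result

# One 10,000 cd source mounted 8 m above the origin, sampled at two points:
print(illuminance([(0.0, 0.0), (6.0, 0.0)], [(0.0, 0.0, 8.0, 10000.0)]))
```

Contouring these grid values then yields the isolux plots the abstract describes.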
Graphical User Interface in Art
NASA Astrophysics Data System (ADS)
Gwilt, Ian
This essay discusses the use of the Graphical User Interface (GUI) as a site of creative practice. By creatively repositioning the GUI as a work of art, it is possible to challenge our understanding and expectations of the conventional computer interface, wherein the icons and navigational architecture of the GUI no longer function as a technological tool. These artistic recontextualizations are often used to question our engagement with technology and to highlight the pivotal place that the domestic computer has taken in our everyday social, cultural and, increasingly, creative domains. Through these works the media specificity of the screen-based GUI can be broken by dramatic changes in scale, form and configuration. This can be seen in the work of new media artists who have re-imagined the GUI in a number of creative forms, both within the digital, as image, animation, net and interactive art, and in the analogue, as print, painting, sculpture, installation and performative event. Furthermore, as a creative work, the GUI can also be utilized as a visual way-finder to explore the relationship between the dynamic potentials of the digital and the concretized qualities of the material artifact.
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.
2015-12-01
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
A Web Based Approach to Integrate Space Culture and Education
NASA Astrophysics Data System (ADS)
Gerla, F.
2002-01-01
Our intention is to dedicate a large section of our web site to space education. As the national User Support and Operation Center (USOC) for the International Space Station, MARS Center is also willing to provide material, such as videos and data, for educational purposes. In order to base our initiative on authoritative precedents, our first step has been a comparative analysis of different space agency education web sites, such as those of ESA and NASA. As is well known, the Internet is a powerful medium, capable of connecting people all over the world and making public a huge amount of information. The first problem, then, is to organize this information in order to use the web as an efficient education tool. That is why studies such as User Modeling (UM), Human-Computer Interaction (HCI) and the Semantic Web have become more important in information technology and science. Traditional search engines are unable to provide optimal retrieval of the content users really search for. The Semantic Web is a valid alternative: according to its principles, web information should be represented using a metadata language, so that users are enabled to successfully search, obtain and study new information from the web. Organizing knowledge in an intelligent manner, preventing users from making errors, and making this formidable quantity of information easily available have also been the starting points for HCI methodologies for defining adaptable interfaces. Here the information is divided into different sets, on the basis of the intended user profile, in order to prevent users from getting lost. Realized as an adaptable interface, an education web site can help users effectively retrieve the information necessary for their purposes (teaching for a teacher and learning for a student). For students it is a great advantage to use interfaces designed on the basis of their age and scholastic level. Indeed, an adaptable interface is intended not just for students, but also for teachers, who can use it to prepare their lessons, retrieve information and organize the didactic material needed to support their lessons. We think it important to use a user-centered "psychology" based on UM: we have to know the needs and expectations of the students. Our intent is to use usability tests not just to prove the site's effectiveness and clearness, but also to investigate the aesthetic preferences of children and young people. Physics, mathematics and chemistry are just some of the difficult learning fields connected with space technologies. Space culture is a potentially never-ending field, and our aim will be to lead students by the hand through this universe of knowledge. This paper will present MARS activities in the framework of the above methodologies, aimed at implementing a web-based approach to integrating space culture and education. The activities are already in progress and some results will be presented in the final paper.
NASA Technical Reports Server (NTRS)
Dick, J. W.; Benda, B. J.
1975-01-01
User- and programmer-oriented documentation for the flexible body option of the Takeoff and Landing Analysis (TOLA) computer program is provided. The user information provides sufficient knowledge of the development and use of the option to enable the engineering user to successfully operate the modified program and understand the results. The programmer's information describes the option structure and logic, enabling a programmer to make major revisions to this part of the TOLA computer program.
Secure Web-Site Access with Tickets and Message-Dependent Digests
Donato, David I.
2008-01-01
Although there are various methods for restricting access to documents stored on a World Wide Web (WWW) site (a Web site), none of the widely used methods is completely suitable for restricting access to Web applications hosted on an otherwise publicly accessible Web site. A new technique, however, provides a mix of features well suited for restricting Web-site or Web-application access to authorized users, including the following: secure user authentication, tamper-resistant sessions, simple access to user state variables by server-side applications, and clean session terminations. This technique, called message-dependent digests with tickets, or MDDT, maintains secure user sessions by passing single-use nonces (tickets) and message-dependent digests of user credentials back and forth between client and server. Appendix 2 provides a working implementation of MDDT with PHP server-side code and JavaScript client-side code.
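A minimal sketch of the general nonce-plus-digest pattern follows; the report's actual MDDT implementation (Appendix 2) is PHP and JavaScript and handles session state variables and clean termination that this fragment omits, so treat it as an illustration of the idea rather than the published protocol:

```python
import hashlib
import hmac
import secrets

tickets = set()  # single-use nonces the server has issued but not yet seen back

def issue_ticket() -> str:
    t = secrets.token_hex(16)
    tickets.add(t)
    return t

def verify(credential_key: bytes, ticket: str, message: bytes, digest: str) -> bool:
    """Accept a request only if its digest binds credentials, ticket, and message."""
    if ticket not in tickets:
        return False            # unknown or already-spent ticket: replay fails
    tickets.discard(ticket)     # enforce single use
    expected = hmac.new(credential_key, ticket.encode() + message,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, digest)

key = b"shared-user-credential"   # hypothetical shared secret
t = issue_ticket()
d = hmac.new(key, t.encode() + b"GET /report", hashlib.sha256).hexdigest()
print(verify(key, t, b"GET /report", d))  # True
print(verify(key, t, b"GET /report", d))  # False: ticket already spent
```

Because the digest depends on the message body as well as the ticket, a tampered request fails verification even if the ticket is intercepted.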
User Account Passwords | High-Performance Computing | NREL
For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in with, and change passwords. After you request an HPC user account, you'll receive a temporary password to use when logging in for the first time.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software to the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
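The Coordinate Conversion service illustrates the standardization argument (d): wrapping one conversion behind a single entry point guarantees every client the same answer. A sketch using the third-party pyproj library; the UTM zone (EPSG:32611) is an assumption chosen for the southern California example, not a detail from the paper:

```python
from pyproj import Transformer  # third-party: pip install pyproj

# One shared transformer behind one function, so every client performs
# the conversion identically.
_to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32611", always_xy=True)

def latlon_to_utm(lat: float, lon: float):
    """Convert geographic coordinates to UTM zone 11N easting/northing."""
    easting, northing = _to_utm.transform(lon, lat)  # always_xy: lon first
    return easting, northing

print(latlon_to_utm(34.05, -118.25))  # downtown Los Angeles
```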
Towards usable and interdisciplinary e-infrastructure (Invited)
NASA Astrophysics Data System (ADS)
de Roure, D.
2010-12-01
e-Science and cyberinfrastructure at their outset tended to focus on ‘big science’ and cross-organisational infrastructures, demonstrating complex engineering with the promise of high returns. It soon became evident that the key to researchers harnessing new technology for everyday use is a user-centric approach which empowers the user - both from a developer and an end-user viewpoint. For example, this philosophy is demonstrated in workflow systems for systematic data processing and in the Web 2.0 approach as exemplified by the myExperiment social web site for sharing workflows, methods and ‘research objects’. Hence the most disruptive aspect of Cloud and virtualisation is perhaps that they make new computational resources and applications usable, creating a flourishing ecosystem for routine processing and innovation alike - and in this we must consider software sustainability. This talk will discuss the changing nature of the e-Science digital ecosystem, focus on e-infrastructure for cross-disciplinary work, and highlight issues in sustainable software development in this context.
SimPhospho: a software tool enabling confident phosphosite assignment.
Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L
2018-03-27
Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with a goal to improve reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. It is open source and implemented in C++. A user's manual with a detailed description of data analysis using SimPhospho, as well as test data, can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barney, B; Shuler, J
2006-08-21
Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents the needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Tri-lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computational Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.
ERIC Educational Resources Information Center
Melrose, S.; And Others
1995-01-01
In this point/counterpoint feature, S. Melrose contends that complex graphical user interfaces (GUIs) threaten the independence and equal employment of individuals with blindness. D. Wakefield then points out that access to the Windows software program for blind computer users is extremely unpredictable, and J. Gill describes a major European…
Communication and collaboration technologies.
Cheeseman, Susan E
2012-01-01
This is the third in a series of columns exploring health information technology (HIT) in the neonatal intensive care unit (NICU). The first column provided background information on the implementation of information technology throughout the health care delivery system, as well as the requisite informatics competencies needed for nurses to fully engage in the digital era of health care. The second column focused on information and resources to master basic computer competencies described by the TIGER initiative (Technology Informatics Guiding Education Reform) as learning about computers, computer networks, and the transfer of data.1 This column will provide additional information related to basic computer competencies, focusing on communication and collaboration technologies. Computers and the Internet have transformed the way we communicate and collaborate. Electronic communication is the ability to exchange information through the use of computer equipment and software.2 Broadly defined, any technology that facilitates linking one or more individuals together is a collaborative tool. Collaboration using technology encompasses an extensive range of applications that enable groups of individuals to work together, including e-mail, instant messaging (IM), and several web applications collectively referred to as Web 2.0 technologies. The term Web 2.0 refers to web applications where users interact and collaborate with each other in a collective exchange of ideas, generating content in a virtual community. Examples of Web 2.0 technologies include social networking sites, blogs, wikis, video sharing sites, and mashups. Many organizations are developing collaborative strategies and tools for employees to connect and interact using web-based social media technologies.3
An Investigation into Web Content Accessibility Guideline Conformance for an Aging Population
ERIC Educational Resources Information Center
Curran, Kevin; Robinson, David
2007-01-01
Poor web site design can cause difficulties for specific groups of users. By applying the Web Content Accessibility Guidelines to a web site, the amount of possible users who can successfully view the content of that site will increase, especially for those who are in the disabled and older adult categories of online users. Older adults are coming…
Code of Federal Regulations, 2011 CFR
2011-01-01
9 Animals and Animal Products, § 130.18: User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding FADDL).
Code of Federal Regulations, 2010 CFR
2010-01-01
9 Animals and Animal Products, § 130.18: User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding FADDL).
CMS results in the Combined Computing Readiness Challenge CCRC'08
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Bauerdick, L.; CMS Collaboration
2009-12-01
During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The results achieved in CCRC'08 - focusing on the distributed workflows - are presented and discussed.
Information-seeking behavior changes in community-based teaching practices*†
Byrnes, Jennifer A.; Kulick, Tracy A.; Schwartz, Diane G.
2004-01-01
A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information. PMID:15243639
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
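The information-theoretic score at the heart of such methods is the mutual information between node time series. A histogram-based sketch in Python follows; PREMER's own estimators, its entropy-reduction ordering, and its handling of missing data and outliers are considerably more elaborate:

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of I(X;Y) in bits for two equal-length series.

    Values are discretized into equal-width bins; a crude estimator
    meant only to show the quantity used to score candidate interactions.
    """
    def discretize(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0
        return [min(int((x - lo) / width), bins - 1) for x in v]

    dx, dy = discretize(xs), discretize(ys)
    n = len(xs)
    px, py, pxy = Counter(dx), Counter(dy), Counter(zip(dx, dy))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Two strongly dependent toy series score about 1 bit with 2 bins:
x = [0.1, 0.9, 0.2, 0.8, 0.15, 0.85, 0.25, 0.75]
y = [1.0, 2.0, 1.1, 1.9, 1.05, 1.95, 1.15, 1.85]
print(mutual_information(x, y, bins=2))
```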
Nakato, Ryuichiro; Itoh, Takehiko; Shirahige, Katsuhiko
2013-07-01
Chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq) can identify genomic regions that bind proteins involved in various chromosomal functions. Although the development of next-generation sequencers offers the technology needed to identify these protein-binding sites, the analysis can be computationally challenging because sequencing data sometimes consist of >100 million reads/sample. Herein, we describe a cost-effective and time-efficient protocol that is generally applicable to ChIP-seq analysis; this protocol uses a novel peak-calling program termed DROMPA to identify peaks and an additional program, parse2wig, to preprocess read-map files. This two-step procedure drastically reduces computational time and memory requirements compared with other programs. DROMPA enables the identification of protein localization sites in repetitive sequences and efficiently identifies both broad and sharp protein localization peaks. Specifically, DROMPA outputs a protein-binding profile map in pdf or png format, which can be easily manipulated by users who have a limited background in bioinformatics. © 2013 The Authors Genes to Cells © 2013 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
Cooperative processing user interfaces for AdaNET
NASA Technical Reports Server (NTRS)
Gutzmann, Kurt M.
1991-01-01
A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users who may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800%, from 0.17 bits/min in the first session to 3.08 bits/min in the last session. The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments have controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training, and specifically that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
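Information transfer rates like those quoted above are conventionally computed with the Wolpaw formula, B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)) bits per selection, scaled by the selection rate. A worked sketch with illustrative numbers; the class count and selection rate below are assumptions, not the study's parameters:

```python
import math

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer per selection, in bits:
    B = log2 N + P*log2 P + (1 - P)*log2((1 - P)/(N - 1))."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Assumed for illustration: a 5-class speller at 92% accuracy making
# 2 selections per minute.
b = wolpaw_bits_per_selection(5, 0.92)
print(f"{b:.2f} bits/selection -> {2 * b:.2f} bits/min")
```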
PETSc Users Manual Revision 3.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers; the resulting code will not be scalable, however, because MATLAB is inherently not scalable. PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code: certainly all parts of a previously sequential code need not be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
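As a small taste of the library from its Python bindings (petsc4py), the sketch below assembles a 1-D Laplacian and solves it with a run-time-configurable Krylov solver. It is a minimal illustration, not an excerpt from the manual:

```python
# Sketch using petsc4py, the Python bindings to PETSc: assemble the
# tridiagonal matrix of a 1-D Laplacian and solve Ax = b, leaving the
# Krylov method and preconditioner selectable via PETSc options.
from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)   # sparse AIJ, 3 nonzeros per row
rstart, rend = A.getOwnershipRange()       # rows owned by this MPI rank
for i in range(rstart, rend):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()   # honors e.g. -ksp_type cg -pc_type jacobi
ksp.solve(b, x)
print("converged in", ksp.getIterationNumber(), "iterations")
```

Run under mpiexec, the same script distributes rows across ranks, which is the parallel-assembly point the manual stresses.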
PETSc Users Manual Revision 3.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers; the resulting code will not be scalable, however, because MATLAB is inherently not scalable. PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code: certainly all parts of a previously sequential code need not be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The ARES (Automated Residential Energy Standard) User's Guide is designed to help the user successfully operate the ARES computer program. This guide assumes that the user is familiar with basic PC skills such as using a keyboard and loading a disk drive. The ARES computer program was designed to assist building code officials in creating a residential energy standard based on local climate and costs.
Research on Influence of Cloud Environment on Traditional Network Security
NASA Astrophysics Data System (ADS)
Ming, Xiaobo; Guo, Jinhua
2018-02-01
Cloud computing is a symbol of the progress of the modern information network. It provides a great deal of convenience to Internet users, but it also brings them considerable risk. One of the main reasons Internet users choose cloud computing is its strong network security performance, which is also the cornerstone of cloud computing applications. This paper briefly explores the influence of the cloud environment on traditional network security and puts forward corresponding solutions.
Working without a Crystal Ball: Predicting Web Trends for Web Services Librarians
ERIC Educational Resources Information Center
Ovadia, Steven
2008-01-01
User-centered design is a principle stating that electronic resources, like library Web sites, should be built around the needs of the users. This article interviews Web developers of library and non-library-related Web sites, determining how they assess user needs and how they decide to adapt certain technologies for users. According to the…
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon
2017-01-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871
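The core adjustment behind such adaptive-focus displays — driving a focus-tunable lens from the gaze-estimated scene depth, offset by the user's prescription — can be suggested in a few lines. The sketch below is a hypothetical simplification for illustration, not the authors' implementation.

```python
def lens_power(gaze_depth_m: float, refractive_error_d: float = 0.0) -> float:
    """Target optical power (diopters) for a focus-tunable lens.

    gaze_depth_m: distance to the virtual point the user fixates (from gaze tracking).
    refractive_error_d: user's spherical correction, e.g. -2.0 for a myope.
    """
    depth = max(gaze_depth_m, 0.1)   # clamp to avoid divergence near zero
    return 1.0 / depth + refractive_error_d

# Example: fixating a virtual object 2 m away, user needing -1.5 D correction
print(lens_power(2.0, -1.5))  # -> -1.0 D
```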
A Distributed User Information System
1990-03-01
Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. Steven D. Miller, Scott Carson, and Leo Mark. Abstract fragment: current user information database technology ... cites ACM Transactions on Computer Systems, May 1988, and [Sol89] K. Sollins, "A plan for internet directory services," Technical report, DDN Network Information Center.
ERIC Educational Resources Information Center
Spicer-Sutton, Jama; Lampley, James; Good, Donald W.
2014-01-01
The purpose of this study was to determine a student's computer knowledge upon course entry and if there was a difference in college students' improvement scores as measured by the difference in pretest and post-test scores of new or novice users, moderate users, and expert users at the end of a college level introductory computing class. This…
Multiple-User, Multitasking, Virtual-Memory Computer System
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.; Stang, David B.
1993-01-01
Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.
ERIC Educational Resources Information Center
Spicer-Sutton, Jama
2013-01-01
The purpose of this study was to determine a student's computer knowledge upon course entry and if there was a difference in college students' improvement scores as measured by the difference in pretest and posttest scores of new or novice users, moderate users, and expert users at the end of a college-level introductory computing class. This…
ERIC Educational Resources Information Center
Manzano, Sancho J., Jr.
2012-01-01
Empirical studies of what is known as end-user computing have been conducted from as early as the 1980s through studies of present-day IT employees. Many studies have used quantitative instruments, such as those by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…
reCAPTCHA: human-based character recognition via Web security measures.
von Ahn, Luis; Maurer, Benjamin; McMillen, Colin; Abraham, David; Blum, Manuel
2008-09-12
CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are widespread security measures on the World Wide Web that prevent automated programs from abusing online services. They do so by asking humans to perform a task that computers cannot yet perform, such as deciphering distorted characters. Our research explored whether such human effort can be channeled into a useful purpose: helping to digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition failed to recognize. We showed that this method can transcribe text with a word accuracy exceeding 99%, matching the guarantee of professional human transcribers. Our apparatus is deployed in more than 40,000 Web sites and has transcribed over 440 million words.
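The paper's mechanism pairs each OCR-failed word with a control word whose answer is already known; a user who types the control word correctly is trusted, and agreeing answers for the unknown word accumulate until a transcription is accepted. A toy sketch of that aggregation follows — the function names and the agreement threshold are illustrative, not the deployed system's values.

```python
from collections import Counter

votes: dict[str, Counter] = {}   # word-image id -> answer tallies

def submit(word_id: str, control_ok: bool, answer: str, threshold: int = 3):
    """Record a human's answer for an OCR-failed word; return the transcription
    once enough trusted users agree."""
    if not control_ok:           # control word typed wrong: ignore this user
        return None
    tally = votes.setdefault(word_id, Counter())
    tally[answer.strip().lower()] += 1
    best, count = tally.most_common(1)[0]
    return best if count >= threshold else None

submit("scan_0042", True, "tobacco")
submit("scan_0042", True, "tobacco")
print(submit("scan_0042", True, "tobacco"))  # -> 'tobacco'
```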
The Figure.tar.gz archive contains a directory for each WRF ensemble run. In these directories are *.csv files for each meteorology variable examined. These are comma-delimited text files that contain statistics for each observation site. Also provided is an R script that reads these files (the user would need to change the directory pointers), computes the variability of error and bias of the ensemble at each site, and plots these for reproduction of figure 3. This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
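For readers without R, the same per-site statistics can be approximated in Python. The sketch below assumes a hypothetical layout — one CSV per variable in each ensemble-run directory, with `site_id` and `error` (model minus observation) columns — which may differ from the actual files.

```python
import glob
import pandas as pd

frames = []
for path in glob.glob("Figure/*/2m_temperature.csv"):   # hypothetical file name
    df = pd.read_csv(path)
    df["member"] = path.split("/")[1]                   # ensemble-run directory
    frames.append(df)

all_runs = pd.concat(frames, ignore_index=True)
# Bias: mean error per site; spread: std of error across ensemble members.
per_site = all_runs.groupby("site_id")["error"].agg(bias="mean", spread="std")
print(per_site.head())
```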
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
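The MPI pattern such a tool relies on — each rank evaluating its own slice of parameter space, with results gathered at the root — can be sketched with mpi4py as below. The objective function and parameter ranges are placeholders for illustration, not SBML-PET-MPI's actual code.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def cost(theta: np.ndarray) -> float:
    """Placeholder objective: distance of a model fit from data."""
    return float(np.sum((theta - 0.5) ** 2))

# Each rank samples its own candidate parameter sets.
rng = np.random.default_rng(seed=rank)
candidates = rng.uniform(0.0, 1.0, size=(1000, 4))
scores = [cost(c) for c in candidates]
local_best = candidates[int(np.argmin(scores))]

# Root collects each rank's best candidate and keeps the overall winner.
gathered = comm.gather((min(scores), local_best.tolist()), root=0)
if rank == 0:
    best_score, best_theta = min(gathered)
    print(best_score, best_theta)
```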
CMS users data management service integration and first experiences with its NoSQL data storage
NASA Astrophysics Data System (ADS)
Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.
2014-06-01
The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job outputs synchronously, as soon as they are produced on the job execution node, to the remote site. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) for input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all user file steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/ATLAS Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and reports to users and service operators while being highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved, as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
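The data design is document-oriented: each user file is a record whose state the service advances from acquisition through transfer to publication, with database views over the state field driving monitoring. A schematic sketch follows — the field names and state machine are guesses for illustration, not the actual AsyncStageOut schema.

```python
# Illustrative transfer-state document, in the spirit of a CouchDB record.
doc = {
    "_id": "user:jdoe/output_1234.root",
    "source_site": "T2_IT_Pisa",
    "destination_site": "T2_ES_CIEMAT",
    "state": "new",            # new -> acquired -> done | failed
    "retry_count": 0,
}

VALID = {"new": {"acquired"}, "acquired": {"done", "failed"}, "failed": {"acquired"}}

def advance(doc: dict, new_state: str) -> dict:
    """Move a file document to its next state; reject illegal transitions."""
    if new_state not in VALID.get(doc["state"], set()):
        raise ValueError(f"illegal transition {doc['state']} -> {new_state}")
    doc["state"] = new_state
    return doc

advance(doc, "acquired")
print(advance(doc, "done"))
```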
PandASoft: Open Source Instructional Laboratory Administration Software
NASA Astrophysics Data System (ADS)
Gay, P. L.; Braasch, P.; Synkova, Y. N.
2004-12-01
PandASoft (Physics and Astronomy Software) is software for organizing and archiving a department's teaching resources and materials. An easy to use, secure interface allows faculty and staff to explore equipment inventories, see what laboratory experiments are available, find handouts, and track what has been used in different classes in the past. Divided into five sections: classes, equipment, laboratories, links, and media, its database cross links materials, allowing users to see what labs are used with which classes, what media and equipment are used with which labs, or simply what equipment is lurking in which room. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. It is designed to allow users to easily customize the headers, footers and colors to blend with existing sites - no programming experience required. While initial data input is labor intensive, the system will save time later by allowing users to quickly answer questions related to what is in inventory, where it is located, how many are in stock, and where online they can learn more. It will also provide a central location for storing PDFs of handouts, and links to applets and cool sites at other universities. PandASoft comes with over 100 links to online resources pre-installed. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing computers and resources for this project.
Shih, Arthur Chun-Chieh; Lee, DT; Peng, Chin-Lin; Wu, Yu-Wei
2007-01-01
Background: When aligning several hundreds or thousands of sequences, such as epidemic virus sequences or homologous/orthologous sequences of large gene families, to reconstruct the epidemiological history or their phylogenies, how to analyze and visualize the alignment results of many sequences has become a new challenge for computational biologists. Although there are several tools available for visualization of very long sequence alignments, few of them are applicable to alignments of many sequences. Results: A multiple-logo alignment visualization tool, called Phylo-mLogo, is presented in this paper. Phylo-mLogo calculates the variabilities and homogeneities of alignment sequences by base frequencies or entropies. Different from traditional representations of sequence logos, Phylo-mLogo not only displays the global logo patterns of the whole alignment of multiple sequences, but also demonstrates their local homologous logos for each clade hierarchically. In addition, Phylo-mLogo allows the user to focus only on the analysis of some important, structurally or functionally constrained sites in the alignment, selected by the user or by built-in automatic calculation. Conclusion: With Phylo-mLogo, the user can symbolically and hierarchically visualize hundreds of aligned sequences simultaneously and easily check the changes of their amino acid sites when analyzing many homologous/orthologous or influenza virus sequences. More information about Phylo-mLogo can be found at URL . PMID:17319966
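The per-column variability measure described — entropy over residue frequencies — is standard Shannon entropy, and is easy to reproduce independently of the tool; a self-contained sketch:

```python
import math
from collections import Counter

def column_entropies(alignment: list[str]) -> list[float]:
    """Shannon entropy (bits) per alignment column; 0 means perfectly conserved."""
    entropies = []
    for column in zip(*alignment):
        counts = Counter(c for c in column if c != "-")   # ignore gap characters
        total = sum(counts.values())
        h = -sum((n / total) * math.log2(n / total) for n in counts.values())
        entropies.append(h)
    return entropies

print(column_entropies(["ACGT", "ACGA", "ACGT"]))  # only the last column varies
```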
1988-03-01
structure of the interface is a mapping from the physical world [for example, the use of icons, which have inherent meaning to users but represent...design alternatives. Mechanisms for linking the user to the computer include physical devices (keyboards), actions taken with the devices (keystrokes)... Fig. 9. INTACVAL. Objects are physical entities or conceptual en...
Characteristics and Effectiveness of the U.S. State E-Government-to-Business Services
ERIC Educational Resources Information Center
Zhao, Jensen J.; Truell, Allen; Alexander, Melody W.
2008-01-01
This study examined the user-interface characteristics and effectiveness of the e-government-to-business (G2B) sites of the 50 U.S. states and Washington, D.C. A group of 306 online users were trained to assess the sites. The findings indicate that the majority of the state G2B sites included the user-interface characteristics that provided online…
Stated choice models for predicting the impact of user fees at public recreation sites
Herbert W. Schroeder; Jordan Louviere
1999-01-01
A crucial question in the implementation of fee programs is how the users of recreation sites will respond to various levels and types of fees. Stated choice models can help managers anticipate the impact of user fees on people's choices among the alternative recreation sites available to them. Models developed for both day and overnight trips to several areas and...
Encryption for Remote Control via Internet or Intranet
NASA Technical Reports Server (NTRS)
Lineberger, Lewis
2005-01-01
A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private keys. A person who records the command data in a given packet for hostile purposes cannot use that packet after the dynamic key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing-unit overhead very small. The use of a National Instruments DataSocket(TM) (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: typical response times in event-driven control, using packets sized ≤300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval ≤1 s, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These measures could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
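The scheme's essence — a command permuted by a private password together with a short-lived dynamic key carried inside recent telemetry packets — can be illustrated as follows. This is a hedged sketch: the key derivation and packet layout here are invented for illustration and are not the protocol's actual definition.

```python
import hashlib

def keystream(dynamic_key: bytes, private_key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the dynamic and private keys."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(private_key + dynamic_key +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_command(command: bytes, dynamic_key: bytes, private_key: bytes) -> bytes:
    ks = keystream(dynamic_key, private_key, len(command))
    return bytes(c ^ k for c, k in zip(command, ks))

# The dynamic key is a subset of a recent monitor packet; it expires quickly,
# so a recorded command packet is useless to an attacker moments later.
dyn = b"\x12\x9a\x40\x07"           # bytes extracted from the last telemetry packet
pwd = b"operator-password"
ct = encrypt_command(b"SET VALVE 3 OPEN", dyn, pwd)
print(encrypt_command(ct, dyn, pwd))  # XOR is symmetric: decrypts back
```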
Fog computing job scheduling optimization based on bees swarm
NASA Astrophysics Data System (ADS)
Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid
2018-04-01
Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions such as job scheduling aim to provide high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA), aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
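The optimization objective described — trading off CPU execution time against allocated memory across fog nodes — might be encoded as a fitness function like the toy sketch below. The weights, data structures, and cost model are illustrative assumptions, not the paper's formulation; a swarm algorithm such as BLA would evaluate many candidate assignments with such a function and keep the best.

```python
def fitness(assignment, tasks, nodes, w_cpu=0.5, w_mem=0.5):
    """Score a candidate schedule (lower is better).

    assignment[i] = index of the fog node running task i.
    """
    cpu_time = sum(t["cycles"] / nodes[n]["speed"] for t, n in zip(tasks, assignment))
    mem_per_node = [0.0] * len(nodes)
    for t, n in zip(tasks, assignment):
        mem_per_node[n] += t["mem_mb"]
    return w_cpu * cpu_time + w_mem * max(mem_per_node)  # penalize the busiest node

tasks = [{"cycles": 4e9, "mem_mb": 256}, {"cycles": 1e9, "mem_mb": 512}]
nodes = [{"speed": 2e9}, {"speed": 1e9}]
print(fitness([0, 1], tasks, nodes))
```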
Cloud flexibility using DIRAC interware
NASA Astrophysics Data System (ADS)
Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo
2014-06-01
Communities in different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware, or even the site where their programs will be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting inevitably leads to an underuse of resources, because the data centers are bound to have periods when one or more of their subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb, and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04, and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors, and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
Bartha, Michael C; Allie, Paul; Kokot, Douglas; Roe, Cynthia Purvis
2015-01-01
Computer users continue to report eye and upper body discomfort even as workstation flexibility has improved. Research shows a relationship between character size, viewing distance, and reading performance. Few reports exist regarding text height viewed under normal office work conditions and eye discomfort. This paper reports self-selected computer display placement, text characteristics, and subjective comfort for older and younger computer workers under real-world conditions. Computer workers were provided with monitors and adjustable display support(s). In Study 1, older workers wearing progressive-addition lenses (PALs) were observed. In Study 2, older workers wearing multifocal lenses and younger workers were observed. Workers wearing PALs experienced less eye and body discomfort with adjustable displays, and less eye and neck discomfort for text visual angles near or greater than ergonomic recommendations. Older workers wearing multifocal correction positioned displays much lower than younger workers. In general, computer users did not adjust character size to ensure that foveal images of text fell within the recommended range. Ergonomic display placement recommendations should be different for computer users wearing multifocal correction for presbyopia. Ergonomic training should emphasize adjusting text size for user comfort.
Problematics of different technical maintenance for computers
NASA Technical Reports Server (NTRS)
Dostalek, Z.
1977-01-01
Two modes of operation are used in the technical maintenance of computers: servicing provided by the equipment supplier, and servicing done by specially trained computer users. The advantages and disadvantages of both modes are discussed. Maintenance downtime is tabulated for two computers serviced by user employees over an eight-year period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, S.A.; Shinn, J.H.
1993-05-01
The Chemical Hazard Warning System (CHAWS) is designed to collect meteorological data and to display, in real time, the dispersion of hazardous chemicals that may result from an accidental release. Meteorological sensors have been placed strategically around the Lexington-Blue Grass Army Depot and are used to calculate direction and hazard distance for the release. Based on these data, arrows depicting the release direction and distance traveled are graphically displayed on a computer screen showing a site map of the facility. The objectives of CHAWS are as follows: To determine the trajectory of the center of mass of released material from the measured wind field; to calculate the dispersion of the released material based on the measured lateral turbulence intensity (sigma theta); to determine the height of the mixing zone by measurement of the inversion height and wind profiles up to an altitude of about 1 km at sites that have SODAR units installed; to archive meteorological data for potential use in climatological descriptions for emergency planning; to archive air-quality data for preparation of compliance reports; and to provide access to the data for near real time hazard analysis purposes. CHAWS sites are located at the Pine Bluff Arsenal, Arkansas, Edgewood area of Aberdeen Proving Ground, Maryland, Tooele Depot, Utah, Lexington-Blue Grass Depot, Kentucky, and Johnston Island in the Pacific. The systems vary between sites with different features and various types of hardware. The basic system, however, is the same. Nonetheless, we have tailored the manuals to the equipment found at each site.
Bhargava, Rahul; Kumar, Prachi; Kaur, Avinash; Kumar, Manjushri; Mishra, Anurag
2014-07-01
To compare the diagnostic value and accuracy of the dry eye scoring system (DESS), conjunctival impression cytology (CIC), tear film breakup time (TBUT), and Schirmer's test in computer users. A case-control study was done at two referral eye centers. Eyes of 344 computer users were compared to 371 eyes of age- and sex-matched controls. The dry eye questionnaire (DESS) was administered to both groups, and they further underwent measurement of TBUT, Schirmer's test, and CIC. Correlation analysis was performed between DESS, CIC, TBUT, and Schirmer's test scores. A Pearson correlation with a coefficient of determination (R²) of 0.5 or more was considered statistically significant. The mean age of cases (26.05 ± 4.06 years) was comparable to controls (25.67 ± 3.65 years) (P = 0.465). The mean symptom score in computer users was significantly higher than in controls (P < 0.001). Mean TBUT, Schirmer's test values, and goblet cell density were significantly reduced in computer users (P < 0.001). TBUT, Schirmer's, and CIC were abnormal in 48.5%, 29.1%, and 38.4% of symptomatic computer users, respectively, as compared to 8%, 6.7%, and 7.3% of symptomatic controls. On correlation analysis, there was a significant (inverse) association of dry eye symptoms (DESS) with TBUT and CIC scores (R² > 0.5), in contrast to Schirmer's scores (R² < 0.5). Duration of computer usage had a significant effect on dry eye symptom severity, TBUT, and CIC scores as compared to Schirmer's test. DESS should be used in combination with TBUT and CIC for dry eye evaluation in computer users.
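The study's significance criterion — R² of at least 0.5 on a Pearson correlation — is straightforward to reproduce; a sketch with synthetic values (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic symptom scores (DESS) and tear breakup times (seconds).
dess = np.array([4, 9, 12, 6, 15, 3, 11, 7])
tbut = np.array([12, 7, 5, 9, 3, 13, 6, 8])

r, p = pearsonr(dess, tbut)
print(f"r = {r:.2f}, R^2 = {r*r:.2f}, meets criterion: {r*r >= 0.5}")
```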
Guest, G F
2000-08-15
At the onset of the new millennium, the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago, technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice: "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet resources published in this journal by presenting information about web sites that cover computer and Internet technologies, reference materials, news, and ways to improve personal productivity. Neither the author nor the Journal endorses any of the sites or products listed; these references and links are included as a matter of convenience for readers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil,Benny Manuel; Ballance, Robert; Haskell, Karen
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, is included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
User-Centric Secure Cross-Site Interaction Framework for Online Social Networking Services
ERIC Educational Resources Information Center
Ko, Moo Nam
2011-01-01
Social networking services are one of the major technological phenomena of Web 2.0. Hundreds of millions of users post messages, photos, and videos on their profiles and interact with other users, but sharing and interaction are limited to the same social networking site. Although users can share some content on a social networking site…
Marketing E-Commerce by Social media using Product Recommendations and user Embedding
NASA Astrophysics Data System (ADS)
Ramalingam, V. V.; Pandian, A.; Masilamani, Kirthiga
2018-04-01
Marketing e-commerce through social media is an effective way to broaden marketing and business reach. A major issue faced when interfacing e-commerce with social media is the cold-start cross-site problem, which occurs when a user has no history of purchase records. For such users, we introduce a method of finding the products a user is interested in without knowing any of the user's demographic information. Products are recommended on the basis of visits; that is, the items most likely to be visited by users appear in the hit list, and these products are rated at the top positions for users to purchase. The e-commerce and social media sites use a strategy of user embedding and product recommendation. Product recommendations are achieved by incorporating Latent Dirichlet Allocation (LDA), re-ranking, and collaborative filtering algorithms. The proposed framework enhances the recommendation system by embedding products and users. This shows the potential to solve the cold-start cross-site problem across e-commerce and social media sites and to enhance the marketing strategy.
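When no purchase history exists, the visit-based hit list described above reduces to popularity ranking; a minimal sketch follows (the real framework layers LDA, re-ranking, and collaborative filtering on top of this baseline).

```python
from collections import Counter

def cold_start_hit_list(visit_log: list[tuple[str, str]], k: int = 5) -> list[str]:
    """Rank products by visit count, for users with no purchase history."""
    visits = Counter(product for _, product in visit_log)
    return [product for product, _ in visits.most_common(k)]

log = [("u1", "phone"), ("u2", "phone"), ("u2", "case"),
       ("u3", "charger"), ("u4", "phone"), ("u4", "case")]
print(cold_start_hit_list(log, k=3))   # -> ['phone', 'case', 'charger']
```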
Skeels, Meredith M; Kurth, Ann; Clausen, Marc; Severynen, Anneleen; Garcia-Smith, Hal
2006-01-01
CARE+ is a tablet PC-based computer counseling tool designed to support medication adherence and secondary HIV prevention for people living with HIV. Thirty HIV+ men and women participated in our user study to assess usability and attitudes towards CARE+. We observed them using CARE+ for the first time and conducted a semi-structured interview afterwards. Our findings suggest computer counseling may reduce social bias and encourage participants to answer questions honestly. Participants felt that discussing sensitive subjects with a computer instead of a person reduced feelings of embarrassment and being judged, and promoted privacy. Results also confirm that potential users think computers can provide helpful counseling, and that many also want human counseling interaction. Our study also revealed that tablet PC-based applications are usable by our population of mixed experience computer users. Computer counseling holds great potential for providing assessment and health promotion to individuals with chronic conditions such as HIV.
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualization, OpenVZ.
NASA Technical Reports Server (NTRS)
Huffman, S.
1977-01-01
Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
Tsai, Tsai-Hsuan; Nash, Robert J; Tseng, Kevin C
2009-05-01
This article presents how the researchers went about answering the research question 'How does assistive technology impact computer use among individuals with cervical spinal cord injury?' through an in-depth investigation of real-life situations among computer operators with cervical spinal cord injuries (CSI). An in-depth survey was carried out to provide insight into the functional abilities and limitations, habitual practices and preferences, choices and utilisation of input devices, personal and/or technical assistance, environmental set-up and arrangements, and special requirements of 20 experienced computer users with cervical spinal cord injuries. Following the survey findings, a five-layer hierarchy of CSI users' needs for input device selection and use was proposed. These needs are ranked in order, beginning with the most basic criterion at the bottom of the pyramid; lower-level criteria must be met before one moves on to the higher levels. This users' needs hierarchy for CSI computer users had not been applied in previous research work and establishes a rationale for the development of alternative input devices. If an input device meets the criteria set out in the needs hierarchy, then a good match of person and technology will be achieved.
Chi, Chia-Fen; Tseng, Li-Kai; Jang, Yuh
2012-07-01
Many disabled individuals lack extensive knowledge about assistive technology that could help them use computers. In 1997, Denis Anson developed a decision tree of 49 evaluative questions designed to evaluate the functional capabilities of the disabled user and choose an appropriate combination of assistive devices, from a selection of 26, that enables the individual to use a computer. In general, occupational therapists guide disabled users through this process; they often have to go over repetitive questions in order to find an appropriate device. A disabled user may require an alphanumeric entry device, a pointing device, an output device, a performance enhancement device, or some combination of these. Therefore, the current research eliminates redundant questions and divides Anson's decision tree into multiple independent subtrees to meet the actual demands of computer users with disabilities. The modified decision tree was tested by six disabled users to verify that it can determine a complete set of assistive devices with a smaller number of evaluative questions. A means to insert new categories of computer-related assistive devices was included to ensure the decision tree can be expanded and updated. The current decision tree can help disabled users and assistive technology practitioners find appropriate computer-related assistive devices that meet clients' individual needs in an efficient manner.
The Graphical User Interface: Crisis, Danger, and Opportunity.
ERIC Educational Resources Information Center
Boyd, L. H.; And Others
1990-01-01
This article describes differences between the graphical user interface and traditional character-based interface systems, identifies potential problems posed by graphic computing environments for blind computer users, and describes some programs and strategies that are being developed to provide access to those environments. (Author/JDD)
Carlsson, Lars; Spjuth, Ola; Adams, Samuel; Glen, Robert C; Boyer, Scott
2010-07-01
Predicting metabolic sites is important in the drug discovery process to aid in rapid compound optimisation. No interactive tool exists and most of the useful tools are quite expensive. Here a fast and reliable method to analyse ligands and visualise potential metabolic sites is presented which is based on annotated metabolic data, described by circular fingerprints. The method is available via the graphical workbench Bioclipse, which is equipped with advanced features in cheminformatics. Due to the speed of predictions (less than 50 ms per molecule), scientists can get real time decision support when editing chemical structures. Bioclipse is a rich client, which means that all calculations are performed on the local computer and do not require network connection. Bioclipse and MetaPrint2D are free for all users, released under open source licenses, and available from http://www.bioclipse.net.
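The fingerprint machinery involved — circular (Morgan-type) atom environments that can be matched against annotated metabolite data — can be explored with RDKit. This sketch only enumerates the per-atom environments; it is not MetaPrint2D's scoring code, and the molecule is an arbitrary example.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CCOc1ccccc1")   # phenetole, an arbitrary example
info = {}
AllChem.GetMorganFingerprint(mol, 2, bitInfo=info)

# info maps each hashed circular environment to (atom index, radius) pairs;
# a metabolic-site predictor would look these hashes up in an annotated table
# of known metabolic transformations and score each atom accordingly.
for env_hash, sites in sorted(info.items()):
    print(env_hash, sites)
```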
Programming of a flexible computer simulation to visualize pharmacokinetic-pharmacodynamic models.
Lötsch, J; Kobal, G; Geisslinger, G
2004-01-01
Teaching pharmacokinetic-pharmacodynamic (PK/PD) models can be made more effective using computer simulations. We propose the programming of educational PK or PK/PD computer simulations as an alternative to the use of pre-built simulation software. This approach has the advantage of adaptability to non-standard or complicated PK or PK/PD models. Simplicity of the programming procedure was achieved by selecting the LabVIEW programming environment. An intuitive user interface to visualize the time courses of drug concentrations or effects can be obtained with pre-built elements. The environment uses a wiring analogy that resembles electrical circuit diagrams rather than abstract programming code. The goal of high interactivity of the simulation was attained by allowing the program to run in continuously repeating loops. This makes the program respond flexibly to user input. The programming is described with the aid of a 2-compartment PK simulation. Examples of more sophisticated simulation programs are also given, where the PK/PD simulation shows drug input, concentrations in plasma and at the effect site, and the effects themselves as a function of time. A multi-compartmental model of morphine, including metabolite kinetics and effects, is also included. The programs are available for download from the World Wide Web at http://www.klinik.uni-frankfurt.de/zpharm/klin/PKPDsimulation/content.html. For pharmacokineticists who only program occasionally, there is the possibility of building the computer simulation, together with the flexible interactive simulation algorithm, for clinical pharmacological teaching in the field of PK/PD models.
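As an example of the kind of model being simulated (using SciPy here rather than the LabVIEW environment the article describes), a 2-compartment IV-bolus pharmacokinetic model is just two coupled linear ODEs; the rate constants below are illustrative, not taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

k10, k12, k21 = 0.3, 0.2, 0.1        # elimination and inter-compartment rates (1/h)

def pk(t, y):
    central, peripheral = y           # drug amounts in the two compartments
    return [-(k10 + k12) * central + k21 * peripheral,
            k12 * central - k21 * peripheral]

# 100 units given as an IV bolus into the central compartment at t = 0.
sol = solve_ivp(pk, (0.0, 24.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 7)
print(np.round(sol.sol(t)[0], 2))    # central-compartment time course
```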
Yaghmaie, Farideh; Jayasuriya, Rohan
2004-01-01
There have been many changes made to information systems in the last decade. Changes in information systems require users constantly to update their computer knowledge and skills. Computer training is a critical issue for any user because it offers them considerable new skills. The purpose of this study was to measure the effects of 'subjective computer training' and management support on attitudes to computers, computer anxiety and subjective norms to use computers. The data were collected from community health centre staff. The results of the study showed that health staff trained in computer use had more favourable attitudes to computers, less computer anxiety and more awareness of others' expectations about computer use than untrained users. However, there was no relationship between management support and computer attitude, computer anxiety or subjective norms. Lack of computer training for the majority of healthcare staff confirmed the need for more attention to this issue, particularly in health centres.
Patrick C. West
1981-01-01
It has been suggested that on-site surveys of users fail to measure crowding accurately because long-time users who knew the area before the "crowds" came tend to feel the most crowded, and thus do not return. Such "displaced" users would not be included in current on-site survey samples. Results from a limited test at the Sylvania Recreation Area...
ERIC Educational Resources Information Center
Vergo, John; Karat, Clare-Marie; Karat, John; Pinhanez, Claudio; Arora, Renee; Cofino, Thomas; Riecken, Doug; Podlaseck, Mark
This paper summarizes a 10-month long research project conducted at the IBM T.J. Watson Research Center aimed at developing the design concept of a multi-institutional art and culture web site. The work followed a user-centered design (UCD) approach, where interaction with prototypes and feedback from potential users of the web site were sought…
Code of Federal Regulations, 2011 CFR
2011-07-01
... through a Web Site. Web Site is a site located on the World Wide Web that can be located by an end user... transmitted over the Internet during the relevant period to all end users within the United States from all...
Code of Federal Regulations, 2014 CFR
2014-07-01
... through a Web Site. Web Site is a site located on the World Wide Web that can be located by an end user... transmitted over the Internet during the relevant period to all end users within the United States from all...
NASA Technical Reports Server (NTRS)
Nichols, Kelvin F.; Best, Susan; Schneider, Larry
2004-01-01
With so many security issues involved with wireless networks, the technology has not been fully utilized in the area of mission critical applications. These applications would include the areas of telemetry, commanding, voice, and video. Wireless networking would allow payload operators the mobility to take computers outside of the control room to their offices and anywhere else in the facility that the wireless network was extended. But the risk is too great of having someone sit just inside of your wireless network coverage and intercept enough of your network traffic to steal proprietary data from a payload experiment or, worse yet, hack back into your system and do even greater harm by issuing harmful commands. Wired Equivalent Privacy (WEP) is improving but has a way to go before it can be trusted to protect mission critical data. Today's hackers are becoming more aggressive and innovative, and in order to take advantage of the benefits that wireless networking offers, appropriate security measures need to be in place that will thwart hackers. The Virtual Private Network (VPN) offers a solution to the security problems that have kept wireless networks from being used for mission critical applications. VPN provides a level of encryption that will ensure that data is protected while it is being transmitted over a wireless local area network (LAN). The VPN allows a user to authenticate to the site that the user needs to access. Once this authentication has taken place, the network traffic between that site and the user is encapsulated in VPN packets with the Triple Data Encryption Standard (3DES). 3DES is an encryption standard that uses a single secret key to encrypt and decrypt data. The length of the encryption key is 168 bits, as opposed to its predecessor DES, which has a 56-bit encryption key. Even though 3DES is the common encryption standard today, the Advanced Encryption Standard (AES), which provides even better encryption at a lower cycle cost, is gaining acceptance. The user computer running the VPN client and the target site running the VPN firewall exchange this encryption key and therefore are the only ones that are able to decipher the data. The level of encryption offered by the VPN is making it possible for wireless networks to pass the strict security policies that have kept them from being used in the past. Now people will be able to benefit from the many advantages that wireless networking has to offer in the area of mission critical applications.
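The 168-bit 3DES encryption the abstract relies on is available in standard libraries; a hedged sketch with PyCryptodome, illustrating the cipher only (not the VPN's authentication or key-exchange machinery, and with an invented command string):

```python
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = DES3.adjust_key_parity(get_random_bytes(24))   # 3-key 3DES, 168-bit key
iv = get_random_bytes(8)                             # 64-bit block cipher IV

ct = DES3.new(key, DES3.MODE_CBC, iv).encrypt(
    pad(b"ARM PAYLOAD CAMERA", DES3.block_size))
pt = unpad(DES3.new(key, DES3.MODE_CBC, iv).decrypt(ct), DES3.block_size)
print(pt)   # -> b'ARM PAYLOAD CAMERA'
```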
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-12-01
The Earth Science Data Grid System (ESDGS) is software supporting earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), managing the metadata associated with data sets, users, and resources. We are also developing additional services for 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we describe the software architecture and components of the system and use a practical example supporting storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.
Commissioning of a CERN Production and Analysis Facility Based on xrootd
NASA Astrophysics Data System (ADS)
Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim
2011-12-01
The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and export to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper describes the xrootd-based CERN production and analysis facility for the ATLAS experiment, and in particular the experiment use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system, and real-life experience with data processing and data analysis.
NASA Technical Reports Server (NTRS)
Perry, Charleen M.; Vansteenberg, Michael E.
1992-01-01
The National Space Science Data Center (NSSDC) has developed an automated data retrieval request service utilizing our Data Archive and Distribution Service (NDADS) computer system. NDADS currently has selected project data written to optical disk platters, with the disks residing in a robotic 'jukebox' near-line environment. This allows rapid and automated access to the data with no staff intervention required. Automated help information and user services are also available. The request system permits an average-size data request to be completed within minutes of the request being sent to NSSDC. A mail message, in the format described in this document, retrieves the data and can send it to a remote site. Also listed in this document are the data currently available.