Sample records for access large amounts

  1. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when local cloud resources become insufficient.

  2. A Database as a Service for the Healthcare System to Store Physiological Signal Data.

    PubMed

    Chang, Hsien-Tsung; Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the following characteristics of the observed physiological signal records: 1) a large number of users, 2) a large amount of data, 3) low information variability, 4) data privacy authorization, and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.
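
    The "file pattern" idea in this abstract can be sketched in a few lines: bulk samples live in flat files, and the database keeps only a small metadata row per recording. The schema, paths, and helper names below are illustrative assumptions, not the paper's actual DaaS implementation.

      # Sketch of storing bulk physiological samples in flat files while the
      # database holds only lightweight metadata rows. Schema and file layout
      # are hypothetical, not the system described in the paper.
      import array
      import pathlib
      import sqlite3

      DATA_DIR = pathlib.Path("signals")
      DATA_DIR.mkdir(exist_ok=True)

      db = sqlite3.connect("daas_meta.db")
      db.execute("""CREATE TABLE IF NOT EXISTS signal_meta (
                      user_id TEXT, signal_type TEXT, recorded_at TEXT,
                      sample_rate_hz INTEGER, path TEXT)""")

      def store_signal(user_id, signal_type, recorded_at, rate_hz, samples):
          """Write raw samples to a file; index them with one small DB row."""
          path = DATA_DIR / f"{user_id}_{signal_type}_{recorded_at}.f32"
          path.write_bytes(array.array("f", samples).tobytes())
          db.execute("INSERT INTO signal_meta VALUES (?, ?, ?, ?, ?)",
                     (user_id, signal_type, recorded_at, rate_hz, str(path)))
          db.commit()

      def load_signal(path):
          """Read the bulk data back without touching the database engine."""
          samples = array.array("f")
          samples.frombytes(pathlib.Path(path).read_bytes())
          return samples.tolist()

      store_signal("u42", "heart_rate", "20160101T1200", 1, [72.0, 71.5, 73.2])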

  3. A Database as a Service for the Healthcare System to Store Physiological Signal Data

    PubMed Central

    Lin, Tsai-Huei

    2016-01-01

    Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive for follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records– 1) a large number of users, 2) a large amount of data, 3) low information variability, 4) data privacy authorization, and 5) data access by designated users—we wish to resolve physiological signal record-relevant issues utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance. PMID:28033415

  4. A Cerebellar-model Associative Memory as a Generalized Random-access Memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1989-01-01

    A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
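
    The mechanism is concrete enough to sketch. Below is a toy sparse distributed memory in this spirit: many random "hard locations" holding bit counters, a Hamming-radius activation rule, and a pattern used as both address and data. All parameters are illustrative and far smaller than Kanerva's model assumes.

      # Toy sparse distributed memory: write nudges counters in all hard
      # locations near the address; read pools those counters and thresholds.
      import random

      DIM, LOCATIONS, RADIUS = 64, 500, 24   # word length, hard locations, radius
      random.seed(1)
      addresses = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(LOCATIONS)]
      counters = [[0] * DIM for _ in range(LOCATIONS)]

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def active(addr):
          return [i for i, hard in enumerate(addresses) if hamming(hard, addr) <= RADIUS]

      def write(addr, data):
          # every activated location nudges its counters toward the data word
          for i in active(addr):
              for j, bit in enumerate(data):
                  counters[i][j] += 1 if bit else -1

      def read(addr):
          # pooled counters are thresholded, so noise in the address is tolerated
          sums = [0] * DIM
          for i in active(addr):
              for j in range(DIM):
                  sums[j] += counters[i][j]
          return [1 if s > 0 else 0 for s in sums]

      word = [random.randint(0, 1) for _ in range(DIM)]
      write(word, word)        # use the pattern as both address and data
      noisy = word[:]
      noisy[0] ^= 1            # flip one bit of the address
      print(read(noisy) == word)   # usually True: recall despite noise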

  5. Duration of extinction trials as a determinant of instrumental extinction in terrestrial toads (Rhinella arenarum).

    PubMed

    Puddington, Martín M; Papini, Mauricio R; Muzio, Rubén N

    2018-01-01

    Instrumental learning guides behavior toward resources. When such resources are no longer available, approach to previously reinforced locations is reduced, a process called extinction. The present experiments are concerned with factors affecting the extinction of acquired behaviors in toads. In previous experiments, total reward magnitude in acquisition and duration of extinction trials were confounded. The present experiments were designed to test the effects of these factors in factorial designs. Experiment 1 varied reward magnitude (900, 300, or 100 s of water access per trial) and amount of acquisition training (5 or 15 daily trials). With total amount of water access equated in acquisition, extinction with large rewards was faster (longer latencies in 900/5 than 300/15), but with total amount of training equated, extinction with small rewards was faster (longer latencies in 100/15 than 300/15). Experiment 2 varied reward magnitude (1200 or 120 s of water access per trial) while holding constant the number of acquisition trials (5 daily trials) and the duration of extinction trials (300 s). Extinction performance was lower with small rather than large reward magnitudes (longer latencies in 120/300 than in 1200/300). Thus, instrumental extinction depends upon the amount of time toads are exposed to the empty goal compartment during extinction trials.

  6. Helping Students Interpret Large-Scale Data Tables

    ERIC Educational Resources Information Center

    Prodromou, Theodosia

    2016-01-01

    New technologies have completely altered the ways that citizens can access data. Indeed, emerging online data sources give citizens access to an enormous amount of numerical information that provides new sorts of evidence used to influence public opinion. In this new environment, two trends have had a significant impact on our increasingly…

  7. Stakeholder engagement and feedback efforts to increase use of the iCSS ToxCast Dashboard (SETAC)

    EPA Science Inventory

    In the era of ‘Big Data’ research, many government agencies are engaged in generating and making public large amounts of data that underlie both research and regulatory decisions. Public access increases the ‘democratization’ of science by enhancing transparency and access. Howev...

  8. ASK-LDT 2.0: A Web-Based Graphical Tool for Authoring Learning Designs

    ERIC Educational Resources Information Center

    Zervas, Panagiotis; Fragkos, Konstantinos; Sampson, Demetrios G.

    2013-01-01

    During the last decade, Open Educational Resources (OERs) have gained increased attention for their potential to support open access, sharing and reuse of digital educational resources. Therefore, a large number of digital educational resources have become available worldwide through web-based open access repositories which are referred to as…

  9. Accessible ecology: Synthesis of the long, deep, and broad

    USDA-ARS?s Scientific Manuscript database

    Dramatic changes in climate, land cover, and habitat availability have occurred over the past several centuries to influence every ecosystem on Earth. Large amounts of data have been collected to document changes. Solutions to these environmental problems have been more elusive, in large part becaus...

  10. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, when trying to project these data with both reasonable speed and reasonable accuracy. Current single-threaded methods can suffer from one of two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods and distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at low cost that provide access to supercomputer-class capability. Combining these techniques may result in the ability to use large amounts of geographic data in time-critical situations.
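
    The proposed combination is easy to sketch at a high level: partition the coordinate list across worker processes and apply the conversion in parallel. The sketch below uses Python's multiprocessing, with a placeholder equirectangular conversion standing in for a real projection and interpolation step.

      # Split a large batch of coordinates across worker processes. The
      # transform is a placeholder (simple equirectangular scaling), not an
      # accurate map projection or the paper's interpolation method.
      import math
      from multiprocessing import Pool

      R = 6371000.0  # mean Earth radius, metres

      def project(lonlat):
          lon, lat = lonlat
          x = R * math.radians(lon) * math.cos(math.radians(lat))
          y = R * math.radians(lat)
          return x, y

      if __name__ == "__main__":
          points = [(lon / 100.0, lat / 100.0)
                    for lon in range(-18000, 18000, 60)
                    for lat in range(-9000, 9000, 60)]
          with Pool() as pool:                          # one worker per core
              projected = pool.map(project, points, chunksize=4096)
          print(len(projected), "points projected")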

  11. Improving Decisions with Data

    ERIC Educational Resources Information Center

    Johnson, Doug

    2004-01-01

    Schools gather, store and use an increasingly large amount of data. Keeping track of everything from bus routes to building access codes to test scores to sports equipment is done with the help of electronic database programs. Large databases designed for budgeting and student record keeping have long been an integral part of the educational…

  12. Searching for New Double Stars with a Computer

    NASA Astrophysics Data System (ADS)

    Bryant, T. V.

    2015-04-01

    The advent of computers with large amounts of RAM and fast processors, as well as easy internet access to large online astronomical databases, has made computer searches based on astrometric data practicable for most researchers. This paper describes one such search that has uncovered hitherto unrecognized double stars.

  13. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical and tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. Queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs an SQL query and submits it to the ORACLE RDBMS for execution. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
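
    The translation step lends itself to a small illustration. The sketch below (in Python rather than CLIPS, with a simplified pattern syntax and hypothetical table names; it is not the ICADS generator itself) shows the essential move: a fact-style pattern in, a SQL SELECT out, with each result row destined to be asserted back as a fact.

      # A fact-style pattern goes in; a SQL SELECT comes out. In the system
      # described, each result row would then be asserted as a CLIPS fact.
      # Pattern syntax, table and column names are simplified/hypothetical.
      import re

      def pattern_to_sql(pattern):
          """Translate e.g. '(room (floor 2) (area > 20))' into SQL."""
          relation = re.match(r"\(\s*(\w+)", pattern).group(1)
          fields = re.findall(r"\((\w+)\s*(>=|<=|=|>|<)?\s*([\w.]+)\)", pattern)
          clauses = [f"{f.upper()} {op or '='} {val}" for f, op, val in fields]
          sql = f"SELECT * FROM {relation.upper()}"
          return sql + (" WHERE " + " AND ".join(clauses) if clauses else "")

      print(pattern_to_sql("(room (floor 2) (area > 20))"))
      # -> SELECT * FROM ROOM WHERE FLOOR = 2 AND AREA > 20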

  14. DEFINING THE CHEMICAL SPACE OF PUBLIC GENOMIC DATA.

    EPA Science Inventory

    The pharmaceutical industry has demonstrated success in integrating chemogenomic knowledge into predictive toxicological models, due in part to industry's access to large amounts of proprietary and commercial reference genomic data sets.

  15. Planetary Surface Visualization and Analytics

    NASA Astrophysics Data System (ADS)

    Law, E. S.; Solar System Treks Team

    2018-04-01

    An introduction and update of the Solar System Treks Project which provides a suite of interactive visualization and analysis tools to enable users (engineers, scientists, public) to access large amounts of mapped planetary data products.

  16. Genomics Portals: integrative web-platform for mining genomics data.

    PubMed

    Shinde, Kaustubh; Phatak, Mukta; Freudenberg, Johannes M; Chen, Jing; Li, Qian; Joshi, Vineet K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold tremendous potential for gaining new insights into the functioning of living systems. The Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc.), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes the Genomics Portals platform a unique tool for interpreting the results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals back-end databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  17. Genomics Portals: integrative web-platform for mining genomics data

    PubMed Central

    2010-01-01

    Background: A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold tremendous potential for gaining new insights into the functioning of living systems. Results: The Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc.), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion: The integrated access to primary genomics data, functional knowledge and analytical tools makes the Genomics Portals platform a unique tool for interpreting the results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals back-end databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org. PMID:20070909

  18. Cloud radiative effects and changes simulated by the Coupled Model Intercomparison Project Phase 5 models

    NASA Astrophysics Data System (ADS)

    Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In

    2017-07-01

    Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amounts and their radiative effects (CREs) in the historical runs driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationships between cloud amount and the controlling large-scale environment are also reproduced with varying fidelity across models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as best models, and the average of the four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the marine boundary layer regions in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K^-1 and a net radiative warming of 0.46 W m^-2 K^-1, suggesting a positive cloud feedback on global warming.

  19. Online Pornography--Should Schools Be Teaching Young People about the Risks? An Exploration of the Views of Young People and Teaching Professionals

    ERIC Educational Resources Information Center

    Baker, Karen Elizabeth

    2016-01-01

    The Internet has made sexually explicit media more accessible to young people. Online pornography is diverse, can be very graphic, and a large amount is available free of charge with restrictions varying by country. Many young people are accessing online pornography, intentionally or unintentionally, and there are fears that this could impact on…

  20. Rising Expectations: Access to Biomedical Information

    PubMed Central

    Lindberg, D. A. B.; Humphreys, B. L.

    2008-01-01

    Objective: To provide an overview of the expansion in public access to electronic biomedical information over the past two decades, with an emphasis on developments to which the U.S. National Library of Medicine contributed. Methods: Review of the increasingly broad spectrum of web-accessible genomic data, biomedical literature, consumer health information, clinical trials data, and images. Results: The amount of publicly available electronic biomedical information has increased dramatically over the past twenty years. Rising expectations regarding access to biomedical information were stimulated by the spread of the Internet, the World Wide Web, and advanced searching and linking techniques. These informatics advances simplified and improved access to electronic information and reduced costs, which enabled inter-organizational collaborations to build and maintain large international information resources and also aided outreach and education efforts. The demonstrated benefits of free access to electronic biomedical information encouraged the development of public policies that further increase the amount of information available. Conclusions: Continuing rapid growth of publicly accessible electronic biomedical information presents tremendous opportunities and challenges, including the need to ensure uninterrupted access during disasters or emergencies and to manage digital resources so they remain available for future generations. PMID:18587496

  1. Chinese-American Parents' Perspectives about Using the Internet to Access Information for Children with Special Needs

    ERIC Educational Resources Information Center

    Zeng, Songtian; Cheatham, Gregory A.

    2017-01-01

    As the Internet contains large amounts of health- and education-related information, it provides a potentially efficient and affordable format for directly reaching a large number of families with evidence-based health- and education-related information for their children with disabilities. Little is known, however, about Internet…

  2. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  3. Integrated Data Capturing Requirements for 3d Semantic Modelling of Cultural Heritage: the Inception Protocol

    NASA Astrophysics Data System (ADS)

    Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.

    2017-02-01

    The generation of high-quality 3D models can still be very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched by metadata. In this framework, the ongoing EU-funded project INCEPTION - Inclusive Cultural Heritage in Europe through 3D semantic modelling - proposes a workflow aimed at achieving efficient 3D digitization methods, post-processing tools for enriched semantic modelling, and web-based solutions and applications to ensure wide access for experts and non-experts. In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. Its purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.

  4. Tools to Manage and Access the NOMAD Data

    NASA Astrophysics Data System (ADS)

    Trompet, L.; Vandaele, A. C.; Thomas, I. R.

    2018-04-01

    The NOMAD instrument on board the ExoMars spacecraft will generate a large amount of data on the atmosphere of Mars. The Planetary Aeronomy Division at IASB is willing to make its tools and these data available to the whole planetary science community.

  5. Geophysical data base

    NASA Technical Reports Server (NTRS)

    Williamson, M. R.; Kirschner, L. R.

    1975-01-01

    A general data-management system that provides a random-access capability for large amounts of data is described. The system operates on a CDC 6400 computer using a combination of magnetic tape and disk storage. A FORTRAN subroutine package is provided to simplify the maintenance and use of the data.

  6. Effects of consumption of choline and lecithin on neurological and cardiovascular systems.

    PubMed

    Wood, J L; Allison, R G

    1982-12-01

    This report concerns possible adverse health effects and benefits that might result from consumption of large amounts of choline, lecithin, or phosphatidylcholine. Indications from preliminary investigations that administration of choline or lecithin might alleviate some neurological disturbances, prevent hypercholesteremia and atherosclerosis, and restore memory and cognition have resulted in much research and public interest. Symptoms of tardive dyskinesia and Alzheimer's disease have been ameliorated in some patients, and varied responses have been observed in the treatment of Gilles de la Tourette's disease, Friedreich's ataxia, levodopa-induced dyskinesia, mania, Huntington's disease, and myasthenic syndrome. Further clinical trials, especially in conjunction with cholinergic drugs, are considered worthwhile but will require sufficient amounts of pure phosphatidylcholine. The public has access to large amounts of commercial lecithin. Because high intakes of lecithin or choline produce acute gastrointestinal distress, sweating, salivation, and anorexia, it is improbable that individuals will incur lasting health hazards from self-administration of either compound. Development of depression or supersensitivity of dopamine receptors and disturbance of the cholinergic-dopaminergic-serotonergic balance is a concern with prolonged, repeated intakes of large amounts of lecithin.

  7. Health burden from peat wildfire in North Carolina

    EPA Science Inventory

    In June 2008, a wildfire smoldering through rich peat deposits in the Pocosin Lakes National Wildlife Refuge produced massive amounts of smoke and exposed a largely rural North Carolina area to air pollution in excess of the National Ambient Air Quality Standards. In this talk, w...

  8. Measurement-Driven Characterization of the Mobile Environment

    ERIC Educational Resources Information Center

    Soroush, Hamed

    2013-01-01

    The concurrent deployment of high-quality wireless networks and large-scale cloud services offers the promise of secure ubiquitous access to a seemingly limitless amount of content. However, as users' expectations have grown more demanding, the performance and connectivity failures endemic to the existing networking infrastructure have become more…

  9. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  10. User-Centered Indexing for Adaptive Information Access

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Mathe, Nathalie

    1996-01-01

    We are focusing on information access tasks characterized by a large volume of hypermedia-connected technical documents, a need for rapid and effective access to familiar information, and long-term interaction with evolving information. The problem for technical users is to build and maintain a personalized, task-oriented model of the information in order to quickly access relevant information. We propose a solution which provides user-centered adaptive information retrieval and navigation. This solution supports users in customizing information access over time. It is complementary to information discovery methods which provide access to new information, since it lets users customize future access to previously found information. It relies on a technique, called the Adaptive Relevance Network, which creates and maintains a complex indexing structure to represent a personal user's information access maps organized by concepts. This technique is integrated within the Adaptive HyperMan system, which helps NASA Space Shuttle flight controllers organize and access large amounts of information. It allows users to select and mark any part of a document as interesting, and to index that part with user-defined concepts. Users can then do subsequent retrieval of marked portions of documents. This functionality allows users to define and access personal collections of information, which are dynamically computed. The system also supports collaborative review by letting users share group access maps. The adaptive relevance network provides long-term adaptation based both on usage and on explicit user input. The indexing structure is dynamic and evolves over time. Learning and generalization support flexible retrieval of information under similar concepts. The network is geared towards more recent information access, and automatically manages its size in order to maintain rapid access when scaling up to a large hypermedia space. We present results of simulated learning experiments.
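
    The indexing idea is simple to model: concepts point at marked document portions with weights that strengthen on use and fade otherwise. The sketch below invents its own update rules for illustration; the actual Adaptive Relevance Network is more elaborate.

      # Toy concept-to-document index with usage-based adaptation. The
      # reinforcement and decay rules are invented for this sketch.
      from collections import defaultdict

      class RelevanceIndex:
          def __init__(self, decay=0.95, reinforce=1.0):
              self.weights = defaultdict(dict)   # concept -> {doc_part: weight}
              self.decay, self.reinforce = decay, reinforce

          def mark(self, concept, doc_part, weight=1.0):
              """Explicit user input: index a document portion under a concept."""
              self.weights[concept][doc_part] = weight

          def retrieve(self, concept, top=5):
              """Return best-weighted portions and reinforce what was used."""
              ranked = sorted(self.weights[concept].items(),
                              key=lambda kv: kv[1], reverse=True)[:top]
              for part, _ in ranked:             # usage-based adaptation
                  self.weights[concept][part] += self.reinforce
              return [part for part, _ in ranked]

          def age(self):
              """Gradually forget rarely used associations."""
              for parts in self.weights.values():
                  for part in parts:
                      parts[part] *= self.decay

      idx = RelevanceIndex()
      idx.mark("ascent-procedures", "shuttle-manual:p112")
      print(idx.retrieve("ascent-procedures"))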

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Lingda; Hayes, Ari; Song, Shuaiwen

    Modern GPUs employ caches to improve memory system efficiency. However, a large amount of cache space is underutilized due to the irregular memory accesses and poor spatial locality commonly exhibited in GPU applications. Our experiments show that using smaller cache lines can improve cache space utilization, but doing so frequently suffers significant performance loss by introducing a large number of extra cache requests. In this work, we propose a novel cache design named tag-split cache (TSC) that enables fine-grained cache storage to address the problem of cache space underutilization while keeping the number of memory requests unchanged. TSC divides the tag into two parts to reduce storage overhead, and it supports multiple cache line replacements in one cycle.
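
    The underutilization problem itself is easy to quantify with a toy model. The sketch below only measures how much of the allocated cache space scattered accesses actually touch under coarse versus fine allocation units; it does not reproduce the tag-split mechanism, which the paper introduces precisely to get fine-grained storage without extra memory requests.

      # How much of the allocated cache space do scattered 4-byte accesses
      # actually use? Compare coarse 128 B lines with 32 B units. This is an
      # illustration of the motivation only; TSC itself is not modelled.
      import random

      random.seed(0)
      accesses = [random.randrange(1 << 20) & ~3 for _ in range(20000)]

      def utilization(unit_bytes):
          allocated = {a // unit_bytes for a in accesses}   # units brought in
          used_words = {a // 4 for a in accesses}           # 4 B words touched
          return 4 * len(used_words) / (unit_bytes * len(allocated))

      print(f"128 B lines: {utilization(128):.1%} of allocated bytes used")
      print(f" 32 B units: {utilization(32):.1%} of allocated bytes used")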

  12. Genetic and phenological variation of tocochromanol (vitamin E) content in wild (Daucus carota L. var. carota) and domesticated carrot (D. carota L. var. sativa)

    PubMed Central

    Luby, Claire H; Maeda, Hiroshi A; Goldman, Irwin L

    2014-01-01

    Carrot roots (Daucus carota L. var. sativa) produce tocochromanol compounds, collectively known as vitamin E. However, little is known about their types and amounts. Here we determined the range and variation in types and amounts of tocochromanols in a variety of cultivated carrot accessions throughout carrot postharvest storage and reproductive stages and in wild-type roots (Daucus carota L. var. carota). Of eight possible tocochromanol compounds, we detected and quantified α-, and the combined peak for β- and γ- forms of tocopherols and tocotrienols. Significant variation in amounts of tocochromanol compounds was observed across accessions and over time. Large increases in α-tocopherol were noted during both reproductive growth and the postharvest stages. The variation of tocochromanols in carrot root tissue provides useful information for future research seeking to understand the role of these compounds in carrot root tissue or to breed varieties with increased levels of these compounds. PMID:26504534

  13. The Information System at CeSAM

    NASA Astrophysics Data System (ADS)

    Agneray, F.; Gimenez, S.; Moreau, C.; Roehlly, Y.

    2012-09-01

    Modern large observational programmes produce large amounts of data from various origins, and need high-level quality control, fast data access via easy-to-use graphical interfaces, and the possibility to cross-correlate information coming from different observations. The Centre de donnéeS Astrophysique de Marseille (CeSAM) offers web access to VO-compliant information systems for the data of different projects (VVDS, HeDAM, EXODAT, HST-COSMOS,…), including ancillary data obtained outside Laboratoire d'Astrophysique de Marseille (LAM) control. The CeSAM information systems provide catalogue downloads and additional services to search, extract, and display imaging and spectroscopic data through multi-criteria and Cone Search interfaces.

  14. The Convergence of Information Technology, Data, and Management in a Library Imaging Program

    ERIC Educational Resources Information Center

    France, Fenella G.; Emery, Doug; Toth, Michael B.

    2010-01-01

    Integrating advanced imaging and processing capabilities in libraries, archives, and museums requires effective systems and information management to ensure that the large amounts of digital data about cultural artifacts can be readily acquired, stored, archived, accessed, processed, and linked to other data. The Library of Congress is developing…

  15. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.

  16. Analytics to Better Interpret and Use Large Amounts of Heterogeneous Data

    NASA Astrophysics Data System (ADS)

    Mathews, T. J.; Baskin, W. E.; Rinsland, P. L.

    2014-12-01

    Data scientists at NASA's Atmospheric Science Data Center (ASDC) are seasoned software application developers who have worked with the creation, archival, and distribution of large datasets (multiple terabytes and larger). In order for ASDC data scientists to effectively implement the most efficient processes for cataloging and organizing data access applications, they must be intimately familiar with the data contained in the datasets with which they are working. Key technologies that are critical components of the background of ASDC data scientists include: large RDBMSs (relational database management systems) and NoSQL databases; web services; service-oriented architectures; structured and unstructured data access; as well as processing algorithms. However, as prices of data storage and processing decrease, sources of data increase, and technologies advance - granting more people access to data at real or near-real time - data scientists are being pressured to accelerate their ability to identify and analyze vast amounts of data. With existing tools this is becoming exceedingly more challenging to accomplish. For example, the NASA Earth Science Data and Information System (ESDIS) alone grew from having just over 4 PBs of data in 2009 to nearly 6 PBs of data in 2011. This amount then increased to roughly 10 PBs of data in 2013. With data from at least ten new missions to be added to the ESDIS holdings by 2017, the current volume will continue to grow exponentially and drive the need to be able to analyze more data even faster. Though there are many highly efficient, off-the-shelf analytics tools available, these tools mainly cater towards business data, which is predominantly unstructured. Consequently, there are very few known analytics tools that interface well to archived Earth science data, which is predominantly heterogeneous and structured. This presentation will identify use cases for data analytics from an Earth science perspective in order to begin to identify specific tools that may be able to address those challenges.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, R.A.

    A major goal of the Analysis of Large Data Sets (ALDS) research project at Pacific Northwest Laboratory (PNL) is to provide efficient data organization, storage, and access capabilities for statistical applications involving large amounts of data. As part of the effort to achieve this goal, a self-describing binary (SDB) data file structure has been designed and implemented together with a set of basic data manipulation functions and supporting SDB data access routines. Logical and physical data descriptors are stored in SDB files preceding the data values. SDB files thus provide a common data representation for interfacing diverse software components. This paper describes the various types of data descriptors and data structures permitted by the file design. Data buffering, file segmentation, and a segment overflow handler are also discussed.
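
    The self-describing layout can be illustrated compactly: a descriptor written ahead of the packed values lets any reader interpret the file on its own. The format details below (a JSON descriptor plus struct-packed rows) are invented for the sketch and are not the actual ALDS/SDB layout.

      # Self-describing binary file: a descriptor (field names, types, row
      # count) precedes the packed data values. Format details are invented.
      import json
      import struct

      def write_sdb(path, fields, rows):
          """fields: [(name, struct_code), ...]; rows: list of tuples."""
          header = json.dumps({"fields": fields, "rows": len(rows)}).encode()
          fmt = "<" + "".join(code for _, code in fields)
          with open(path, "wb") as f:
              f.write(struct.pack("<I", len(header)))   # descriptor length
              f.write(header)                           # logical descriptor
              for row in rows:                          # data values
                  f.write(struct.pack(fmt, *row))

      def read_sdb(path):
          """Interpret the file using only its own embedded descriptor."""
          with open(path, "rb") as f:
              (hlen,) = struct.unpack("<I", f.read(4))
              meta = json.loads(f.read(hlen))
              fmt = "<" + "".join(code for _, code in meta["fields"])
              size = struct.calcsize(fmt)
              return meta, [struct.unpack(fmt, f.read(size))
                            for _ in range(meta["rows"])]

      write_sdb("sample.sdb", [["temp", "f"], ["count", "i"]],
                [(21.5, 3), (19.0, 7)])
      print(read_sdb("sample.sdb"))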

  18. Content-Based Medical Image Retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Deserno, Thomas M.

    This chapter details the necessity for alternative access concepts to complement the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data, and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.

  19. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment in postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, along with methods of interacting with the database, either via command-line data management workflows, written in Perl, with useful functionalities that simplify storing and manipulating the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  20. Young people, money, and access to tobacco.

    PubMed

    Wong, Grace; Glover, Marewa; Nosa, Vili; Freeman, Becky; Paynter, Janine; Scragg, Robert

    2007-12-14

    The social and family processes involved in children's sources and use of money in relation to buying cigarettes are not well understood. Hence this study investigated how Maori, Pacific Island, European, and Asian school students access cigarettes, with a special focus on their disposable income. Students aged 11-15 years, recruited through schools, participated in 12 focus groups run by ethnically matched senior student facilitators and researchers. Topics discussed included sources of student money, parental monitoring of the use of money and student access to cigarettes. Students reported that young people can easily buy cigarettes from tobacco retailers. They could also be bought cheaply (50 cents for a roll-your-own) and/or on an "I owe you" basis from friends and social suppliers. Students used money from family, and money that was earned, "scabbed", and borrowed from friends. Cigarettes were also obtained freely from family members or from adults on the street. Whilst parents monitored students' use of large amounts of money, participants experienced relative freedom to spend small amounts which they saved out of money provided by parents for lunches and other purposes. Students were open to parental advice on how to use money but felt they should have the final say. Cigarettes continue to be accessible to children free or at affordable prices. Adults and family members must be discouraged from supplying cigarettes to children. Parents should be made aware of the way children use small amounts of money and advised to monitor, educate, and guide them to discourage cigarette purchase.

  1. Professional Development Of Junior Full Time Support Aerospace Maintenance Duty Officers

    DTIC Science & Technology

    2017-12-01

    ...retrieval of information is effective and efficient. Knowledge management solutions broadly fall into two categories, enterprise solutions...designed to manage large amounts of knowledge and information, access by many concurrent users at multiple organization units and locations, and...

  2. Does Time-on-Task Estimation Matter? Implications for the Validity of Learning Analytics Findings

    ERIC Educational Resources Information Center

    Kovanovic, Vitomir; Gaševic, Dragan; Dawson, Shane; Joksimovic, Srecko; Baker, Ryan S.; Hatala, Marek

    2015-01-01

    With widespread adoption of Learning Management Systems (LMS) and other learning technology, large amounts of data--commonly known as trace data--are readily accessible to researchers. Trace data has been extensively used to calculate time that students spend on different learning activities--typically referred to as time-on-task. These measures…

  3. Gigwa-Genotype investigator for genome-wide analyses.

    PubMed

    Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre

    2016-06-06

    Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regard to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
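
    A flavor of what filtering on variant features and genotype patterns looks like against a MongoDB back end is sketched below; the collection layout and field names are invented for illustration and do not match Gigwa's actual schema.

      # Hypothetical variant filter against a MongoDB-backed store. The
      # database, collection, and field names are illustrative only.
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      variants = client["genotyping"]["variants"]

      # Variants with a missense annotation on chromosome 2, where at least
      # one of two samples carries the alternate allele.
      query = {
          "chrom": "2",
          "annotation.effect": "missense_variant",
          "$or": [
              {"genotypes.sample_A": {"$in": ["0/1", "1/1"]}},
              {"genotypes.sample_B": {"$in": ["0/1", "1/1"]}},
          ],
      }
      for v in variants.find(query).limit(10):
          print(v["chrom"], v["pos"], v["annotation"]["effect"])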

  4. Impact of Medicare on the Use of Medical Services by Disabled Beneficiaries, 1972-1974

    PubMed Central

    Deacon, Ronald W.

    1979-01-01

    The extension of Medicare coverage in 1973 to disabled persons receiving cash benefits under the Social Security Act provided an opportunity to examine the impact of health insurance coverage on utilization and expenses for Part B services. Data on medical services used both before and after coverage, collected through the Current Medicare Survey, were analyzed. Results indicate that access to care (as measured by the number of persons using services) increased slightly, while the rate of use did not. The large increase in the number of persons eligible for Medicare reflected the large increase in the number of cash beneficiaries. Significant increases also were found in the amount charged for medical services. The absence of large increases in access and service use may be attributed, in part, to the already existing source of third party payment available to disabled cash beneficiaries in 1972, before Medicare coverage. PMID:10316939

  5. [Inequities in access to food stamps and meal vouchers in Brazil: an analysis of the Brazilian Household Budgets Survey, 2008-2009].

    PubMed

    Canella, Daniela Silva; Martins, Ana Paula Bortoletto; Bandoni, Daniel Henrique

    2016-03-01

    Food stamps and meal vouchers can influence workers' dietary choices. This study aimed to assess the coverage of these benefits in Brazil and their distribution according to the beneficiaries' socio-demographic and regional characteristics, using data from the Brazilian Household Budgets Survey, 2008-2009. Eligibility criteria were having an occupation and a private or government job, including domestic or temporary work in rural areas. Only 3.2% of eligible individuals reported receiving such benefits. The highest coverage rates were found in the Southeast region and urban areas, and among men, private-sector employees, and those with monthly earnings above five times the minimum wage. The mean monthly amount of such benefits was R$ 177.20 (US$ 100 at the 2009 exchange rate). After adjusting for other variables, the highest amounts were associated with male gender, higher salaries, the Northeast and Central regions, and employment in the public sector. This first analysis of the national coverage of food stamps and meal vouchers showed that a large share of Brazilian workers lack access or have unequal access to such benefits.

  6. Conceptualizing recovery capital: expansion of a theoretical construct.

    PubMed

    Cloud, William; Granfield, Robert

    2008-01-01

    In order to capture key personal and social resources individuals are able to access in their efforts to overcome substance misuse, we introduced the construct of recovery capital into the literature. The purpose of this paper is to further explore the construct and include discussions of implications unexplored in our previous writings. In this paper we reveal the relationship between access to large amounts of recovery capital and substance misuse maintenance and introduce the concept of negative recovery capital. In doing so, we examine the relationships between negative recovery capital and gender, age, health, mental health, and incarceration.

  7. Security System Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    C Language Integrated Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answers, recommends possible quick solutions that can be applied by non-expert persons.

  8. A Data Mining Approach to Improve Re-Accessibility and Delivery of Learning Knowledge Objects

    ERIC Educational Resources Information Center

    Sabitha, Sai; Mehrotra, Deepti; Bansal, Abhay

    2014-01-01

    Today Learning Management Systems (LMS) have become an integral part of learning mechanism of both learning institutes and industry. A Learning Object (LO) can be one of the atomic components of LMS. A large amount of research is conducted into identifying benchmarks for creating Learning Objects. Some of the major concerns associated with LO are…

  9. Computer User's Guide to the Protection of Information Resources. NIST Special Publication 500-171.

    ERIC Educational Resources Information Center

    Helsing, Cheryl; And Others

    Computers have changed the way information resources are handled. Large amounts of information are stored in one central place and can be accessed from remote locations. Users have a personal responsibility for the security of the system and the data stored in it. This document outlines the user's responsibilities and provides security and control…

  10. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
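
    A compact statement of the FIML idea makes the contrast with imputation concrete. The following is the standard casewise log-likelihood (a textbook formulation, not taken from this article): each case i contributes a term built only from its k_i observed variables,

      \log L(\mu, \Sigma) = \sum_{i=1}^{N} -\frac{1}{2} \left[ k_i \log(2\pi)
          + \log \lvert \Sigma_i \rvert
          + (x_i - \mu_i)^{\top} \Sigma_i^{-1} (x_i - \mu_i) \right]

    where x_i collects the observed values for case i, and \mu_i and \Sigma_i are the matching subvector and submatrix of the model-implied mean and covariance. No values are imputed; incomplete cases simply contribute lower-dimensional terms.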

  11. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
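
    The wrapper-script pattern described here is worth a minimal sketch: fixed consecutive phases per pipeline, with rsync handling the modest observatory-to-Fermilab transfers. Host names, paths, and phase commands below are placeholders, not the actual factory scripts.

      # Each pipeline runs through fixed phases; modest transfers use rsync.
      # Hosts, paths, and phase scripts are hypothetical placeholders.
      import subprocess

      PHASES = ["prepare", "submit", "check", "qa"]

      def run_pipeline(name, run_id):
          for phase in PHASES:
              # each phase is its own script so a failed run can resume mid-way
              cmd = [f"./{name}_{phase}.sh", run_id]
              print("running:", " ".join(cmd))
              subprocess.run(cmd, check=True)   # stop the chain on failure

      def sync_from_observatory(remote, local):
          """Incremental update of modest data volumes, as the abstract notes."""
          subprocess.run(["rsync", "-av", "--partial", remote, local], check=True)

      if __name__ == "__main__":
          sync_from_observatory("apo:/data/imaging/", "/fermilab/staging/imaging/")
          run_pipeline("frames", "run_001")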

  12. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyberinfrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (ARs). The current generation of the ESGF federated grid allows consumers of the data to find and download data, with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high-performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API, its capabilities, provide implementation details, and discuss future developments.
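
    The shape of such a call can be sketched with a generic WPS Execute request; the endpoint URL, operation identifier, and input encoding below are illustrative placeholders rather than the ESGF CWT API specification.

      # Generic WPS Execute request: the operation runs server-side, next to
      # the data. Endpoint, identifier, and input encoding are hypothetical.
      import requests

      endpoint = "https://esgf-node.example.org/wps"
      params = {
          "service": "WPS",
          "version": "1.0.0",
          "request": "Execute",
          "identifier": "average",                  # server-side operation
          "datainputs": ("variable=tas;"
                         "dataset=some_cmip5_dataset_id;"
                         "axes=time"),
      }
      resp = requests.get(endpoint, params=params, timeout=60)
      print(resp.status_code)
      # The response is an XML status/result document; the data stays
      # server-side until the (much smaller) reduced output is fetched.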

  13. The wireless networking system of Earthquake precursor mobile field observation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    The mobile field observation network can record and transmit large amounts of data reliably and in real time, strengthening physical signal observations in specific regions and periods and improving monitoring capacity and anomaly-tracking capability. Because current earthquake precursor observation measuring points are numerous and scattered, the networking technology is based on the McWILL broadband wireless access system. The communication system for earthquake precursor mobile field observation transmits large amounts of data from the measuring points to the monitoring center reliably and in real time, through the connection between the field equipment, the broadband wireless access system, and the precursor mobile observation management center system, thereby implementing remote instrument monitoring and data transmission. At present, the earthquake precursor mobile field observation network technology has been applied to fluxgate magnetometer array geomagnetic observations at Tianzhu, Xichang, and in Xinjiang; it has allowed real-time monitoring of the working status of observational instruments deployed over a large area during the last two to three years of large-scale field operation. It can therefore obtain refined local geomagnetic field data and provide high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking technology is well suited to mobile field observation, with the advantages of simple and flexible networking, it also exhibits packet loss when transmitting large amounts of observational data, due to the relatively weak wireless signal and narrow bandwidth. For high-sampling-rate instruments, this project uses data compression, which effectively solves the problem of data transmission packet loss. Control commands, status data, and observational data are transmitted with different priorities and by different means, which keeps the packet loss rate within an acceptable range and does not affect the real-time observation curve. After field running tests and applications in earthquake tracking projects, the mobile field observation wireless networking system operates normally, its functions show good operability and performance, and the quality of data transmission meets the system design requirements, playing a significant role in practical applications.

  14. Single event upset in avionics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taber, A.; Normand, E.

    1993-04-01

    Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
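
    To make the EDAC recommendation concrete, the toy sketch below implements a Hamming(7,4) code that corrects any single flipped bit, the same principle avionics EDAC hardware applies, in wider SECDED form, to protect memory words against neutron-induced upsets.

```python
# Toy illustration of the EDAC principle: a Hamming(7,4) code corrects any
# single bit flip, such as a neutron-induced soft upset. Real avionics EDAC
# is implemented in hardware over wider words, usually with an extra parity
# bit for double-error detection; this is only a minimal software sketch.

def encode(d):                        # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword positions 1-7

def correct(c):                       # c: received seven-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity checks over fixed positions
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    s = s1 | (s2 << 1) | (s4 << 2)    # syndrome = 1-based flip position
    if s:
        c[s - 1] ^= 1                 # flip the erroneous bit back
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a single-event upset
assert correct(word) == encode([1, 0, 1, 1])
```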

  15. REUSABLE PROPULSION ARCHITECTURE FOR SUSTAINABLE LOW-COST ACCESS TO SPACE

    NASA Technical Reports Server (NTRS)

    Bonometti, J. A.; Dankanich, J. W.; Frame, K. L.

    2005-01-01

    The primary obstacle to any space-based mission is, and has always been, the cost of access to space. Even with impressive efforts toward reusability, no system has come close to lowering the cost by a significant amount. It is postulated here that architectural innovation is necessary to make reusability feasible, not incremental subsystem changes. This paper shows two architectural approaches to reusability that merit further study investment. Both inherently have performance increases and cost advantages that could make affordable access to space a near-term reality. A rocket launched from a subsonic aircraft (specifically the Crossbow methodology) and a momentum exchange tether, reboosted by electrodynamics, offer possibilities of substantial reductions in the total transportation architecture mass, making access-to-space cost-effective. They also offer intangible benefits that reduce risk or offer large growth potential. The cost analysis indicates that approximately a 50% savings is obtained using today's aerospace materials and practices.

  16. A simple method for long-term biliary access in large animals.

    PubMed

    Andrews, J C; Knutsen, C; Smith, P; Prieskorn, D; Crudip, J; Klevering, J; Ensminger, W D

    1988-07-01

    A simple method to obtain long-term access to the biliary tree in dogs and pigs is presented. In ten dogs and four pigs, a cholecystectomy was performed, the cystic duct isolated, and a catheter inserted into the cut end of the cystic duct. The catheter was connected to a subcutaneous infusion port, producing a closed, internal system to allow long-term access. The catheter placement was successful in three of the pigs and all of the dogs. Thirty-five cholangiograms were obtained in the 13 subjects by accessing the port with a 20 gauge Huber needle and injecting small amounts (4-10 mL) of contrast under fluoroscopic control. Cholangiograms were obtained up to four months after catheter placement without evidence for catheter failure or surgically induced changes in the biliary tree. This model provides a simple, reliable means to obtain serial cholangiograms in a research setting.

  17. An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Gallagher, Mary C.

    1985-01-01

    There exist a large number of large-scale bibliographic Information Storage and Retrieval (IS&R) systems containing large amounts of valuable data of interest in a wide variety of research applications. These systems are not used to capacity because the end users, i.e., the researchers, have not been trained in the techniques of accessing such systems. This thesis describes the development of a transportable, university-level course in methods of querying on-line interactive Information Storage and Retrieval systems as a solution to this problem. The course was designed for upper-division science and engineering students, to enable these end users to directly access such systems. It is designed to be taught by instructors who are not specialists in either computer science or research skills, and is independent of any particular IS&R system or computer hardware. The project is sponsored by NASA and conducted by the University of Southwestern Louisiana and Southern University.

  18. mzDB: A File Format Using Multiple Indexing Strategies for the Efficient Analysis of Large LC-MS/MS and SWATH-MS Data Sets*

    PubMed Central

    Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard

    2015-01-01

    The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storing. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by a factor of two up to 2000-fold, depending on the particular data access. Similarly, mzDB also shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
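
    To illustrate the kind of lookup this 3D indexing serves, the sketch below runs a retention-time/m/z window query against a server-less SQLite file; the table and column names are invented for the example and do not reflect the actual mzDB schema.

```python
# Hypothetical sketch of a range query served by 3D indexing: fetch data
# chunks whose bounding boxes overlap a retention-time / m/z window from a
# server-less SQLite file. Table and column names are invented and are not
# the actual mzDB schema.
import sqlite3

rt_lo, rt_hi = 1480.0, 1520.0         # retention-time window (seconds)
mz_lo, mz_hi = 651.5, 652.5           # m/z window

con = sqlite3.connect("run.mzDB")
rows = con.execute(
    """
    SELECT id, data
    FROM bounding_box                  -- hypothetical chunk table
    WHERE rt_min <= ? AND rt_max >= ?  -- box overlaps the RT window
      AND mz_min <= ? AND mz_max >= ?  -- box overlaps the m/z window
    """,
    (rt_hi, rt_lo, mz_hi, mz_lo),
).fetchall()
```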

  19. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Treesearch

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  20. Strategy, the Soviet Union and the 1980’s.

    DTIC Science & Technology

    1981-04-01

    American diplomatic relations, and has written and published articles on the interrelationships between detente and deterrence, the origins of the ... cocoa . In addition, the USSR needs an assured access to large amounts of fish. This is one reason why the Kremlin will be quite interested in the Law...demographers originally predicted the census would show. The growth distribution of Soviet population also remains very uneven. The Slavic nationalities

  1. Changes in the Arctic: Background and Issues for Congress

    DTIC Science & Technology

    2010-03-30

    used to support national claims to submerged lands which may contain large amounts of oil, natural gas, methane hydrates, or minerals. Expiration...developments offer opportunities for growth, they are potential sources of competition and conflict for access and natural resources.163 In a February 2009...management of Arctic natural resources and to address socioeconomic impacts of changing patterns in the use of natural resources. Changes in the Arctic

  2. Standalone Internet speech restructuring treatment for adults who stutter: A phase I study.

    PubMed

    Erickson, Shane; Block, Susan; Menzies, Ross; O'Brian, Sue; Packman, Ann; Onslow, Mark

    2016-08-01

    This Phase I trial reports the results of a clinician-free Internet speech restructuring treatment for adults who stutter. The program consists of nine phases with concepts loosely based on the Camperdown Program. Twenty adults who stutter were recruited. They were given unlimited access to the program for 6 months. Primary outcome measures were the percentage of syllables stuttered and self-reported severity ratings. Five participants accessed all phases of the program, while another five accessed more than half the phases. The remaining 10 accessed between one and four phases. Four of five participants who accessed all phases reduced their stuttering frequency by more than 50% and an additional two participants who accessed more than half the phases also achieved similar reductions. These results were confirmed by self-reports of stuttering severity. Stuttering reductions were largely commensurate with the amount of the program accessed. As with other clinician-free programs in related health areas, maintaining adherence to the program's procedures was a significant issue. Nonetheless, this novel approach to treating stuttering has the potential to be a viable alternative for some clients and may help to address the significant access and relapse issues that affect treatment provision for adults who stutter.

  3. Toward a Virtual Solar Observatory: Starting Before the Petabytes Fall

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Although a few, large, space- and groundbased solar physics databases exist at selected locations, there is as yet only limited standardization or interoperability. I describe the outline of a plan to facilitate access to a distributed network of online solar data archives, both large and small. The underlying principle is that the user need not know where the data are, only how to specify which data are desired. At the least, such an approach could considerably simplify the scientific user's access to the enormous amount of solar physics data to be obtained in the next decade. At best, it might mean the withering away of traditional data centers, and all the bureaucracy they entail. This work is supported by the Sun-Earth Connections Division of NASA Office of Space Science, thanks to an anomalous act of largess on the part of the 2001 SEC Senior Review.

  4. Toward a Virtual Solar Observatory: Starting Before the Petabytes Fall

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.

    2001-12-01

    Although a few, large, space- and groundbased solar physics databases exist at selected locations, there is as yet only limited standardization or interoperability. I describe the outline of a plan to facilitate access to a distributed network of online solar data archives, both large and small. The underlying principle is that the user need not know where the data are, only how to specify which data are desired. At the least, such an approach could considerably simplify the scientific user's access to the enormous amount of solar physics data to be obtained in the next decade. At best, it might mean the withering away of traditional data centers, and all the bureaucracy they entail. This work is supported by the Sun-Earth Connections Division of NASA Office of Space Science, thanks to an anomalous act of largess on the part of the 2001 SEC Senior Review.

  5. How restrained eaters perceive the amount they eat.

    PubMed

    Jansen, A

    1996-09-01

    The cognitive model of binge eating states that it is the awareness of a broken diet that disinhibits the restrained eater. It is, according to that model, the perception of having overeaten that triggers disinhibited eating. However, although the perception of the amount eaten plays a central role in cognitive restraint theory, it has never directly been tested how restrained subjects perceive the amount of food they eat. In the present studies, participants were given ad libitum access to large amounts of palatable food and both their perception of the amount eaten and their estimated caloric intake were compared with the amount they actually ate. The restrained participants in these studies ate more than the unrestrained participants. In the first and second studies, the restrained participants consumed 571 and 372 'forbidden' calories respectively, without having the feeling that they had eaten very much, let alone too much. Moreover in both studies, the restrained eaters underestimated their caloric intake, whereas unrestrained eaters estimated their caloric intake quite well. The potential implications of the present findings for the cognitive restraint model are discussed.

  6. Totally Connected Healthcare with TV White Spaces.

    PubMed

    Katzis, Konstantinos; Jones, Richard W; Despotou, Georgios

    2017-01-01

    Recent technological advances in electronics, wireless communications and low-cost medical sensors have generated a plethora of Wearable Medical Devices (WMDs), which are capable of generating very large amounts of new, unstructured real-time data. This contribution outlines how these data can be propagated to a healthcare system through the internet, using long distance Radio Access Networks (RANs), and proposes a novel communication system architecture employing White Space Devices (WSDs) to provide seamless connectivity to its users. Initial findings indicate that the proposed communication system can facilitate broadband services over a large geographical area by taking advantage of the freely available TV White Spaces (TVWS).

  7. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
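
    The mechanism itself is C++-specific; the rough Python analogue below (all names hypothetical) conveys the core idea of variables that own their access functions and enforce integrity checks, catalogued in a table.

```python
# Rough Python analogue (all names hypothetical) of the variable-table
# idea: each variable owns its access methods and validates assignments,
# and variables are grouped into a catalogued table.

class Variable:
    def __init__(self, name, vtype, value=None):
        self.name, self.vtype, self._value = name, vtype, value

    def get(self):
        return self._value

    def set(self, value):
        # Per-variable access function with an integrity check
        if not isinstance(value, self.vtype):
            raise TypeError(f"{self.name} expects {self.vtype.__name__}")
        self._value = value

class VariableTable:
    def __init__(self):
        self._vars = {}

    def define(self, name, vtype, value=None):
        self._vars[name] = Variable(name, vtype, value)

    def __getitem__(self, name):      # catalogued lookup by name
        return self._vars[name]

table = VariableTable()
table.define("mach", float, 0.8)
table["mach"].set(0.85)               # type-checked assignment
```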

  8. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. The paper introduces commonly available tape drives and libraries and discusses their performance limitations, especially focusing on the long latency of tape accesses. This section also describes an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
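
    The round-robin placement underlying striping can be sketched briefly; the block size and device count below are arbitrary illustrative choices, with local byte buffers standing in for disks or tape cartridges.

```python
# Toy sketch of striping: a file's blocks are interleaved round-robin
# across N devices, so a large access can proceed on all devices in
# parallel. "Devices" here are in-memory buffers; the 64 KiB stripe unit
# and device count are arbitrary illustrative choices.
BLOCK = 64 * 1024
N_DEVICES = 4

def stripe(data: bytes):
    devices = [bytearray() for _ in range(N_DEVICES)]
    for k in range(0, len(data), BLOCK):
        devices[(k // BLOCK) % N_DEVICES] += data[k:k + BLOCK]
    return devices

def destripe(devices, total_len):
    out = bytearray()
    k = 0
    while len(out) < total_len:
        dev, off = k % N_DEVICES, (k // N_DEVICES) * BLOCK
        out += devices[dev][off:off + BLOCK]   # read block k back in order
        k += 1
    return bytes(out)

payload = bytes(range(256)) * 2048             # 512 KiB of sample data
assert destripe(stripe(payload), len(payload)) == payload
```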

  9. 76 FR 37779 - Rural Broadband Access Loans and Loan Guarantees Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... DEPARTMENT OF AGRICULTURE Rural Utilities Service Rural Broadband Access Loans and Loan Guarantees... of $325,663,157 in loan funds for the Rural Broadband Access Loans and Loan Guarantees Program for... identifying a definite funding amount. The maximum amount of a loan under this authority will be $75 million...

  10. Advanced Technologies in Safe and Efficient Operating Rooms

    DTIC Science & Technology

    2008-02-01

    of team leader) o a learning environment (where humans play the role of students ). As can be seen, this work is at the confluence of several lines... Abstract Routine clinical information systems now have the ability to gather large amounts of data that surgical managers can access to create a...project is to create a computer system for teaching medical students cognitive skills of an attending physician related to diagnosing and treating

  11. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL , and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL : An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to

  12. Direct Observations of Fracture and the Damage Mechanics of Ceramics

    DTIC Science & Technology

    1988-10-31

    microplasticity up to the fracture load. d. It should have low enough strength in tension and compression to enable strength measurements at easily accessible...15 μm. SEM examination of the grains after large amounts of deformation indicated that the grains are brittle without any evidence of microplasticity. In...and microplasticity in polycrystalline alumina", J.Mater.Sci., 12(1977)791-796. 93. J Lankford, "Compressive microfracture and indentation damage in A1

  13. Defense Spending in Latin America: Arms Race or Commodity Boom

    DTIC Science & Technology

    2008-12-01

    over their territorial and maritime border, while many Peruvians and Bolivians still hold a grudge over the immense amount of Bolivian territory lost...and difficult to understand. Some were so complicated it was difficult distinguishing them from the RANDOM strategy. Unlike in football , where one...since the War of the Pacific when Chile annexed a large portion of Peruvian land and shut off access to the sea to Bolivia. Although Chile’s national

  14. Automatic generation of reports at the TELECOM SCC

    NASA Astrophysics Data System (ADS)

    Beltan, Thierry; Jalbaud, Myriam; Fronton, Jean Francois

    In-orbit satellite follow-up produces a number of reports on a regular basis (daily, weekly, quarterly, annually). Most of these documents reuse the information of former issues, incremented with data from the latest period. They are made up of text, tables, graphs or pictures. The system presented here is the SGMT (Systeme de Gestion de la Memoire Technique), which means Technical Memory Management System. It provides the system operators with tools to generate the greatest part of these reports as automatically as possible. It gives easy access to the reports, and the large amount of available memory enables the user to consult data covering the complete lifetime of a satellite family.

  15. Effects of a 2014 Statewide Policy Change on Cash-Value Voucher Redemptions for Fruits/Vegetables Among Participants in the Supplemental Nutrition Program for Women, Infants, and Children (WIC).

    PubMed

    Okeke, Janice O; Ekanayake, Ruwani M; Santorelli, Melissa L

    2017-10-01

    Purpose: In 2014, the New Jersey Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) began requiring WIC-authorized stores to stock at least two fresh fruits and two fresh vegetables. We aimed to evaluate the effect of this policy change on fruit and vegetable purchases among WIC-participating households and to assess variation by household access to a healthy food store such as a supermarket or large grocery store. Description: Households with continuous WIC enrollment from June 2013 to May 2015 were included (n = 16,415). Participants receive monthly cash-value vouchers (CVVs) to purchase fruits and vegetables. For each household, the CVV redemption proportion was calculated for the period before and after the policy by dividing the total dollar amount redeemed by the total dollar amount issued. Complete redemption was defined as a proportion ≥90%, and the change in complete redemption odds was assessed after adjusting for Supplemental Nutrition Assistance Program participation. Assessment: We observed a small increase following the policy change [odds ratio (OR) 1.10, 95% confidence interval (CI) 1.04-1.17]; however, the effect varied by healthy food access (p = 0.03). The odds increased for households with access to at least one healthy food store (OR 1.13, 95% CI 1.06-1.20), while no effect was observed for households without such access (OR 0.91, 95% CI 0.76-1.10). Conclusion: The policy change was associated with a small increase in purchasing, but only among households with healthy food access. The state is addressing this gap through technical assistance interventions targeting WIC-authorized small stores in communities with limited access.

  16. ENGINES: exploring single nucleotide variation in entire human genomes.

    PubMed

    Amigo, Jorge; Salas, Antonio; Phillips, Christopher

    2011-04-19

    Next generation ultra-sequencing technologies are starting to produce extensive quantities of data from entire human genome or exome sequences, and therefore new software is needed to present and analyse this vast amount of information. The 1000 Genomes project has recently released raw data for 629 complete genomes representing several human populations through their Phase I interim analysis and, although there are certain public tools available that allow exploration of these genomes, to date there is no tool that permits comprehensive population analysis of the variation catalogued by such data. We have developed a genetic variant site explorer able to retrieve data for Single Nucleotide Variation (SNVs), population by population, from entire genomes without compromising future scalability and agility. ENGINES (ENtire Genome INterface for Exploring SNVs) uses data from the 1000 Genomes Phase I to demonstrate its capacity to handle large amounts of genetic variation (>7.3 billion genotypes and 28 million SNVs), as well as deriving summary statistics of interest for medical and population genetics applications. The whole dataset is pre-processed and summarized into a data mart accessible through a web interface. The query system allows the combination and comparison of each available population sample, while searching by rs-number list, chromosome region, or genes of interest. Frequency and FST filters are available to further refine queries, while results can be visually compared with other large-scale Single Nucleotide Polymorphism (SNP) repositories such as HapMap or Perlegen. ENGINES is capable of accessing large-scale variation data repositories in a fast and comprehensive manner. It allows quick browsing of whole genome variation, while providing statistical information for each variant site such as allele frequency, heterozygosity or FST values for genetic differentiation. Access to the data mart generating scripts and to the web interface is granted from http://spsmart.cesga.es/engines.php. © 2011 Amigo et al; licensee BioMed Central Ltd.
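
    As a worked example of the per-site statistics mentioned above (and not the ENGINES implementation), the snippet below computes pooled allele frequency, expected heterozygosity, and Wright's FST for a biallelic variant observed in two population samples.

```python
# Illustrative computation (not the ENGINES implementation) of per-site
# statistics for a biallelic SNV: pooled allele frequency, expected
# heterozygosity, and Wright's FST between two population samples.

def fst(p1, n1, p2, n2):
    """p1, p2: alt-allele frequencies; n1, n2: sampled chromosome counts."""
    p = (n1 * p1 + n2 * p2) / (n1 + n2)           # pooled allele frequency
    h_s = (n1 * 2 * p1 * (1 - p1) + n2 * 2 * p2 * (1 - p2)) / (n1 + n2)
    h_t = 2 * p * (1 - p)                          # pooled heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

print(fst(0.10, 120, 0.45, 180))   # differentiated populations: FST ~ 0.14
```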

  17. Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.

    2017-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 PB and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access, enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long-running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.
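
    The event-driven pattern described here can be sketched as follows; the handler below is a generic illustration of an S3-triggered AWS Lambda function, not code from the actual Cumulus archive.

```python
# Generic illustration of an S3-triggered AWS Lambda handler (not actual
# Cumulus code): read a newly archived granule straight from the shared
# object store, touching only the bytes needed.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:           # standard S3 event layout
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        header = obj["Body"].read(1024)       # e.g. inspect a file header
        print(f"new granule s3://{bucket}/{key}: read {len(header)} bytes")
```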

  18. Human interface to large multimedia databases

    NASA Astrophysics Data System (ADS)

    Davis, Ben; Marks, Linn; Collins, Dave; Mack, Robert; Malkin, Peter; Nguyen, Tam

    1994-04-01

    The emergence of high-speed networking for multimedia will have the effect of turning the computer screen into a window on a very large information space. As this information space increases in size and complexity, providing users with easy and intuitive means of accessing information will become increasingly important. Providing access to large amounts of text has been the focus of work for hundreds of years and has resulted in the evolution of a set of standards, from the Dewey Decimal System for libraries to the recently proposed ANSI standards for representing information on-line: KIF, the Knowledge Interchange Format, and CGs, Conceptual Graphs. Certain problems remain unsolved by these efforts, though: how to let users know the contents of the information space, so that they know whether or not they want to search it in the first place; how to facilitate browsing; and, more specifically, how to facilitate visual browsing. These issues are particularly important for users in educational contexts and have been the focus of much of our recent work. In this paper we discuss some of the solutions we have prototyped: specifically, visual means, visual browsers, and visual definitional sequences.

  19. Telemedicine with integrated data security in ATM-based networks

    NASA Astrophysics Data System (ADS)

    Thiel, Andreas; Bernarding, Johannes; Kurth, Ralf; Wenzel, Rudiger; Villringer, Arno; Tolxdorff, Thomas

    1997-05-01

    Telemedical services rely on the digital transfer of large amounts of data in a short time. The acceptance of these services therefore requires new hardware and software concepts. The fast exchange of data is well served by a high-speed ATM-based network. Fast access to the data from different platforms poses more difficult problems, which may be divided into those relating to standardized data formats and those relating to different levels of data security across nations. For standardized access to the image data, a DICOM 3.0 server was implemented. Images were converted into the DICOM 3.0 standard if necessary. Access to the server is provided by an implementation of DICOM in JAVA, allowing access to the data from different platforms. Data protection measures to ensure the secure transfer of sensitive patient data are not yet solved within the DICOM concept. We investigated different schemes to protect data using the DICOM/JAVA modality with as little impact on data transfer speed as possible.

  20. Design and Implementation of an Environmental Mercury Database for Northeastern North America

    NASA Astrophysics Data System (ADS)

    Clair, T. A.; Evers, D.; Smith, T.; Goodale, W.; Bernier, M.

    2002-12-01

    An important issue faced when attempting to interpret geochemical variability studies across large regions is the accumulation, access and consistent display of data from a large number of sources. We were given the opportunity to provide a regional assessment of mercury distribution in surface waters, sediments, invertebrates, fish, and birds in a region extending from New York State to the Island of Newfoundland. We received over 20 individual databases from State, Provincial, and Federal governments, as well as university researchers from both Canada and the United States. These databases came in a variety of formats and sizes. Our challenge was to find a way of accumulating and presenting the large amounts of acquired data in a consistent, easily accessible fashion, which could then be more easily interpreted. Moreover, the database had to be portable and easily distributable to the large number of study participants. We developed a static database structure using a web-based approach, which we were then able to mount on a server accessible to all project participants. The site also contained all the necessary documentation related to the data, its acquisition, and the methods used in its analysis and interpretation. We then copied the complete web site onto CD-ROMs, which we distributed to all project participants, funding agencies, and other interested parties. The CD-ROM formed a permanent record of the project and was issued ISSN and ISBN numbers so that the information remains accessible to researchers in perpetuity. Here we present an overview of the CD-ROM and data structures, of the information accumulated over the first year of the study, and initial interpretation of the results.

  1. Strategies for responding to RAC requests electronically.

    PubMed

    Schramm, Michael

    2012-04-01

    Providers that would like to respond to complex RAC reviews electronically should consider three strategies: Invest in an EHR software package or a high-powered scanner that can quickly scan large amounts of paper. Implement an audit software platform that will allow providers to manage the entire audit process in one place. Use a CONNECT-compatible gateway capable of accessing the Nationwide Health Information Network (the network on which the electronic submission of medical documentation program runs).

  2. The Computer: An Effective Research Assistant

    PubMed Central

    Gancher, Wendy

    1984-01-01

    The development of software packages such as data management systems and statistical packages has made it possible to process large amounts of research data. Data management systems make the organization and manipulation of such data easier. Floppy disks ease the problem of storing and retrieving records. Patient information can be kept confidential by limiting access to computer passwords linked with research files, or by using floppy disks. These attributes make the microcomputer essential to modern primary care research. PMID:21279042

  3. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC), which reduces the impact of reading time and thus speeds up access to the data; an Image Processing Component (IPC), which breaks the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and a Data Visualization component, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution, without reducing the data in any way.
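
    A rough sketch of the Overview + Detail idea (not AstroVis's actual implementation, which is OpenGL-based): keep a coarse strided overview of the cube in memory and materialize a full-resolution subregion only on demand. The file name and cube dimensions are assumptions.

```python
# Rough sketch of Overview + Detail (not AstroVis's implementation): a
# memory-mapped cube is sampled coarsely for the overview, and only a
# small subregion is materialized at full resolution. The file name and
# cube dimensions are assumptions for the example.
import numpy as np

cube = np.memmap("cube.dat", dtype=np.float32, mode="r",
                 shape=(512, 2048, 2048))            # assumed dimensions

overview = np.asarray(cube[::8, ::16, ::16])         # coarse, fits in RAM
detail = np.asarray(cube[100:108, 900:1156, 900:1156])  # full-res subcube
```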

  4. LinkWinds: An Approach to Visual Data Analysis

    NASA Technical Reports Server (NTRS)

    Jacobson, Allan S.

    1992-01-01

    The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.

  5. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets that are effectively immobile and, in practice, difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
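
    JHelioviewer itself streams JPEG 2000 via JPIP; as a minimal local illustration of the same resolution-level mechanism, the sketch below (assuming the glymur Python bindings and a local .jp2 file) decodes a reduced-resolution view without reading the full image.

```python
# Local illustration of JPEG 2000 resolution-level access, assuming the
# glymur bindings and a local .jp2 file (JHelioviewer streams via JPIP).
import glymur

jp2 = glymur.Jp2k("aia_171.jp2")   # hypothetical solar image file
preview = jp2[::4, ::4]            # decode two resolution levels down
full = jp2[:]                      # full-resolution decode when needed
```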

  6. Advancements in Data Access at the IRIS Data Management Center to Broaden Data Use

    NASA Astrophysics Data System (ADS)

    Benson, R. B.; Trabant, C. M.; Ahern, T. K.

    2013-12-01

    The IRIS Data Management Center (DMC) has been serving digital seismic data for more than 20 years and has offered a variety of access mechanisms that have stood the test of time. However, beginning in 2010, and in response to multiple needs being requested of the IRIS DMC, we have developed web service interfaces to access our primary data repository. These new interfaces have rapidly grown in popularity. In 2013, the third full year of their operation, these services were responsible for half of all the data shipped from the DMC. In the same time period, the amount of data shipped via the other access mechanisms has also increased. This non-linear growth of data shipments reflects the increased data usage by the research community. We believe that our new web service interfaces are well suited to fit future data access needs and signify a significant evolution in integrating different scientific data sets. Based on standardized web technologies, support for writing access software is ubiquitous. As fundamentally programmatic interfaces, the services are well suited for integration into data processing systems, in particular large-scale data processing systems. Their programmatic nature also makes them well suited for use with brokering systems where, for example, data from multiple disciplines can be integrated. In addition to providing access to raw data, the DMC created web services that apply simple, on-the-fly processing and format conversion. Processing the data (e.g. converting to Earth units) and formatting the result into something generally usable (e.g. ASCII) removes important barriers for users working in other disciplines. The end result is that we are shipping a much larger amount of data in a manner more directly usable by users. Many of these principles will be applied to the DMC's future work in the NSF's EarthCube Web Service Building Blocks project.
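
    One way to exercise such programmatic interfaces today is through the ObsPy FDSN client, shown below fetching waveforms from the IRIS DMC and converting raw counts to Earth units; the station and time window are arbitrary examples.

```python
# Fetch data from the IRIS DMC with the ObsPy FDSN client and convert raw
# counts to Earth units (ground velocity); the station and time window are
# arbitrary examples.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2010-02-27T06:45:00")
st = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 900,
                          attach_response=True)
st.remove_response(output="VEL")   # counts -> m/s, the "Earth units" step
print(st)
```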

  7. A biomedical information system for retrieval and manipulation of NHANES data.

    PubMed

    Mukherjee, Sukrit; Martins, David; Norris, Keith C; Jenders, Robert A

    2013-01-01

    The retrieval and manipulation of data from large public databases like the U.S. National Health and Nutrition Examination Survey (NHANES) may require sophisticated statistical software and significant expertise that may be unavailable in the university setting. In response, we have developed the Data Retrieval And Manipulation System (DReAMS), an automated information system that handles all processes of data extraction and cleaning and then joins different subsets to produce analysis-ready output. The system is a browser-based data warehouse application in which the input data from flat files or operational systems are aggregated in a structured way so that the desired data can be read, recoded, queried and extracted efficiently. The current pilot implementation of the system provides access to a limited portion of the NHANES database. We plan to increase the amount of data available through the system in the near future and to extend the techniques to other large databases from the CDU archive, which currently holds about 53 databases.
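
    A hedged sketch of the extract-and-join step that DReAMS automates: NHANES component files share the respondent identifier SEQN, so subsets can be merged into one analysis-ready table. The file and variable names below follow NHANES conventions but are chosen for illustration.

```python
# Sketch (not the DReAMS implementation) of joining NHANES component files
# on the shared respondent ID SEQN to produce an analysis-ready table.
import pandas as pd

demo = pd.read_sas("DEMO_G.XPT")   # demographics component (SAS transport)
labs = pd.read_sas("GLU_G.XPT")    # laboratory component (glucose)

merged = demo.merge(labs, on="SEQN", how="inner")
analysis = merged[["SEQN", "RIDAGEYR", "RIAGENDR", "LBXGLU"]].dropna()
```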

  8. Techniques for increasing the efficiency of Earth gravity calculations for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Lyubomirsky, A. S.

    1981-01-01

    Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
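
    A worked sketch of the first technique: approximate a smooth field inside one three-dimensional cell by a tensor-product Chebyshev expansion, then evaluate it cheaply at arbitrary points. The degrees and the stand-in field below are illustrative.

```python
# Worked sketch of the first technique: fit a tensor-product Chebyshev
# expansion to a smooth field sampled inside one 3-D cell, then evaluate
# it cheaply at an arbitrary point. Degrees and field are illustrative.
import numpy as np
from numpy.polynomial import chebyshev as C

deg = (6, 6, 6)                                     # expansion degrees
pts = np.cos(np.pi * (np.arange(8) + 0.5) / 8)      # Chebyshev nodes
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
f = np.exp(-(X**2 + Y**2 + Z**2))                   # stand-in for gravity

V = C.chebvander3d(X.ravel(), Y.ravel(), Z.ravel(), deg)
coef, *_ = np.linalg.lstsq(V, f.ravel(), rcond=None)
coef = coef.reshape(deg[0] + 1, deg[1] + 1, deg[2] + 1)

approx = C.chebval3d(0.3, -0.1, 0.7, coef)          # fast cell evaluation
exact = np.exp(-(0.3**2 + 0.1**2 + 0.7**2))
print(abs(approx - exact))                          # small residual error
```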

  9. BREAD LOAF ROADLESS AREA, VERMONT.

    USGS Publications Warehouse

    Slack, John F.; Bitar, Richard F.

    1984-01-01

    On the basis of mineral-resource survey the Bread Loaf Roadless Area, Vermont, is considered to have probable resource potential for the occurrence of volcanogenic massive sulfide deposits of copper, zinc, and lead, particularly in the north and northeastern section of the roadless area. Nonmetallic commodities include minor deposits of sand and gravel, and abundant rock suitable for crushing. However, large amounts of these materials in more accessible locations are available outside the roadless area. A possibility exists that oil or natural gas resources may be present at great depth.

  10. Universal SaaS platform of internet of things for real-time monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Tongke; Wu, Gang

    2018-04-01

    Real-time monitoring, as one kind of IoT (Internet of Things) service, has a wide range of application scenarios. To support rapid construction and deployment of applications and to avoid repetitive development work in these processes, this paper designs and develops a universal SaaS platform of IoT for real-time monitoring. Evaluation shows that this platform can provide SaaS service to multiple tenants and achieve high real-time performance under large amounts of device access.

  11. High and escalating levels of cocaine intake are dissociable from subsequent incentive motivation for the drug in rats.

    PubMed

    Allain, Florence; Bouayad-Gervais, Karim; Samaha, Anne-Noël

    2018-01-01

    Taking high and increasing amounts of cocaine is thought to be necessary for the development of addiction. Consequently, a widely used animal model of drug self-administration involves giving animals continuous drug access during long sessions (LgA), as this produces high and escalating levels of intake. However, human cocaine addicts likely use the drug with an intermittent rather than continuous pattern, producing spiking brain cocaine levels. Using an intermittent-access (IntA) cocaine self-administration procedure in rats, we studied the relationship between escalation of cocaine intake and later incentive motivation for the drug, as measured by responding under a progressive ratio schedule of cocaine reinforcement. First, under IntA, rats escalated their cocaine use both within and between sessions. However, escalation did not predict later incentive motivation for the drug. Second, incentive motivation for cocaine was similar in IntA-rats limited to low- and non-escalating levels of drug intake (IntA-Lim) and in IntA-rats that took high and escalating levels of drug. Finally, IntA-Lim rats took much less cocaine than rats given continuous drug access during each self-administration session (LgA-rats). However, IntA-Lim rats later responded more for cocaine under a progressive ratio schedule of reinforcement. Taking large and escalating quantities of cocaine does not appear necessary to increase incentive motivation for the drug. Taking cocaine in an intermittent pattern, even in small amounts, is more effective in producing this addiction-relevant change. Thus, beyond the amount of drug taken, the temporal kinetics of drug use predict change in drug use over time.

  12. Lightweight Integrated Solar Array (LISA): Providing Higher Power to Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Carr, John; Fabisinski, Leo; Lockett, Tiffany Russell

    2015-01-01

    Affordable and convenient access to electrical power is essential for all spacecraft and is a critical design driver for the next generation of smallsats, including CubeSats, which are currently extremely power limited. The Lightweight Integrated Solar Array (LISA), a concept designed, prototyped, and tested at the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama, provides an affordable, lightweight, scalable, and easily manufactured approach for power generation in space. This flexible technology has many wide-ranging applications from serving small satellites to providing abundant power to large spacecraft in GEO and beyond. By using very thin, ultraflexible solar arrays adhered to an inflatable or deployable structure, a large area (and thus large amount of power) can be folded and packaged into a relatively small volume.

  13. Maximizing the Scientific Return of Low Cost Planetary Missions Using Solar Electric Propulsion(abstract)

    NASA Technical Reports Server (NTRS)

    Russell, C. T.; Metzger, A.; Pieters, C.; Elphic, R. C.; McCord, T.; Head, J.; Abshire, J.; Philips, R.; Sykes, M.; A'Hearn, M.; hide

    1994-01-01

    After many years of development, solar electric propulsion is now a practical low cost alternative for many planetary missions. In response to the recent Discovery AO, we and a number of colleagues have examined the scientific return from a mission to map the Moon and then rendezvous with a small body. In planning this mission, we found that solar electric propulsion was quite affordable under the Discovery guidelines, that many targets could be reached more rapidly with solar electric propulsion than chemical propulsion, that a large number of planetary bodies were accessible with modest propulsion systems, and that such missions were quite adaptable, with generous launch windows which minimized mission risks. Moreover, solar electric propulsion is ideally suited for large payloads requiring a large amount of power.

  14. Challenges for Wireless Mesh Networks to provide reliable carrier-grade services

    NASA Astrophysics Data System (ADS)

    von Hugo, D.; Bayer, N.

    2011-08-01

    Provision of mobile and wireless services within today's competitive environment, driven by a huge number of steadily emerging new services and applications, is both a challenge and an opportunity for radio network operators. Deployment and operation of an infrastructure for mobile and wireless broadband connectivity generally requires planning effort and large investments. A promising approach to reducing expenses for radio access networking is offered by Wireless Mesh Networks (WMNs). Here, traditional dedicated backhaul connections to each access point are replaced by wireless multi-hop links between neighbouring access nodes and a few gateways to the backbone, employing standard radio technology. Such a solution provides high flexibility in both deployment and the amount of offered capacity, and shall reduce overall expenses. On the other hand, currently available mesh solutions do not provide carrier-grade service quality and reliability, and often fail to cope with high traffic load. The EU project CARMEN (CARrier grade MEsh Networks) was initiated to incorporate different heterogeneous technologies and new protocols to allow for reliable transmission over "best effort" radio channels, to support reliable mobility and network management, self-configuration and dynamic resource usage, and thus to offer permanent or temporary broadband access at high cost efficiency. The contribution provides an overview of preliminary project results, with focus on the main technical challenges from a research and implementation point of view. In particular, the impact of mesh topology on overall system performance in terms of throughput and connection reliability, and aspects of a dedicated hybrid mobility management solution, will be discussed.

  15. Fruit cuticle lipid composition and water loss in a diverse collection of pepper (Capsicum).

    PubMed

    Parsons, Eugene P; Popopvsky, Sigal; Lohrey, Gregory T; Alkalai-Tuvia, Sharon; Perzelan, Yaacov; Bosland, Paul; Bebeli, Penelope J; Paran, Ilan; Fallik, Elazar; Jenks, Matthew A

    2013-10-01

    Pepper (Capsicum spp.) fruits are covered by a relatively thick coating of cuticle that limits fruit water loss, a trait previously associated with maintenance of postharvest fruit quality during commercial marketing. To shed light on the chemical-compositional diversity of cuticles in pepper, the fruit cuticles from 50 diverse pepper genotypes from a world collection were screened for both wax and cutin monomer amount and composition. These same genotypes were also screened for fruit water loss rate and this was tested for associations with cuticle composition. Our results revealed an unexpectedly large amount of variation for the fruit cuticle lipids, with a more than 14-fold range for total wax amounts and a more than 16-fold range for cutin monomer amounts between the most extreme accessions. Within the major wax constituents fatty acids varied from 1 to 46%, primary alcohols from 2 to 19%, n-alkanes from 13 to 74% and triterpenoids and sterols from 10 to 77%. Within the cutin monomers, total hexadecanoic acids ranged from 54 to 87%, total octadecanoic acids ranged from 10 to 38% and coumaric acids ranged from 0.2 to 8% of the total. We also observed considerable differences in water loss among the accessions, and unique correlations between water loss and cuticle constituents. The resources described here will be valuable for future studies of the physiological function of fruit cuticle, for the identification of genes and QTLs associated with fruit cuticle synthesis in pepper fruit, and as a starting point for breeding improved fruit quality in pepper. © 2013 Scandinavian Plant Physiology Society.

  16. Massive Sorghum Collection Genotyped with SSR Markers to Enhance Use of Global Genetic Resources

    PubMed Central

    Bouchet, Sophie; Chantereau, Jacques; Deu, Monique; Gardes, Laetitia; Noyer, Jean-Louis; Rami, Jean-François; Rivallan, Ronan; Li, Yu; Lu, Ping; Wang, Tianyu; Folkertsma, Rolf T.; Arnaud, Elizabeth; Upadhyaya, Hari D.; Glaszmann, Jean-Christophe; Hash, C. Thomas

    2013-01-01

    Large ex situ collections require approaches for sampling manageable amounts of germplasm for in-depth characterization and use. We present here a large diversity survey in sorghum with 3367 accessions and 41 reference nuclear SSR markers. Of 19 alleles on average per locus, the largest numbers of alleles were concentrated in central and eastern Africa. Cultivated sorghum appeared structured according to geographic regions and race within region. A total of 13 groups of variable size were distinguished. The peripheral groups in western Africa, southern Africa and eastern Asia were the most homogeneous and clearly differentiated. Except for Kafir, there was little correspondence between races and marker-based groups. Bicolor, Caudatum, Durra and Guinea types were each dispersed in three groups or more. Races should therefore better be referred to as morphotypes. Wild and weedy accessions were very diverse and scattered among cultivated samples, reinforcing the idea that large gene-flow exists between the different compartments. Our study provides an entry to global sorghum germplasm collections. Our reference marker kit can serve to aggregate additional studies and enhance international collaboration. We propose a core reference set in order to facilitate integrated phenotyping experiments towards refined functional understanding of sorghum diversity. PMID:23565161

  17. The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets

    PubMed Central

    ANDERSON, JR; MOHAMMED, S; GRIMM, B; JONES, BW; KOSHEVOY, P; TASDIZEN, T; WHITAKER, R; MARC, RE

    2011-01-01

    Modern microscope automation permits the collection of vast amounts of continuous anatomical imagery in both two and three dimensions. These large data sets present significant challenges for data storage, access, viewing, annotation and analysis. The cost and overhead of collecting and storing the data can be extremely high. Large data sets quickly exceed an individual's capability for timely analysis and present challenges in efficiently applying transforms, if needed. Finally annotated anatomical data sets can represent a significant investment of resources and should be easily accessible to the scientific community. The Viking application was our solution created to view and annotate a 16.5 TB ultrastructural retinal connectome volume and we demonstrate its utility in reconstructing neural networks for a distinctive retinal amacrine cell class. Viking has several key features. (1) It works over the internet using HTTP and supports many concurrent users limited only by hardware. (2) It supports a multi-user, collaborative annotation strategy. (3) It cleanly demarcates viewing and analysis from data collection and hosting. (4) It is capable of applying transformations in real-time. (5) It has an easily extensible user interface, allowing addition of specialized modules without rewriting the viewer. PMID:21118201

  18. Making large amounts of meteorological plots easily accessible to users

    NASA Astrophysics Data System (ADS)

    Lamy-Thepaut, Sylvie; Siemen, Stephan; Sahin, Cihan; Raoult, Baudouin

    2015-04-01

    The European Centre for Medium-Range Weather Forecasts (ECMWF) is an international organisation providing its member organisations with forecasts in the medium time range of 3 to 15 days, and some longer-range forecasts for up to a year ahead, with varying degrees of detail. As part of its mission, ECMWF generates an increasing number of forecast data products for its users. To support the work of forecasters and researchers and to let them make the best use of ECMWF forecasts, the Centre also provides tools and interfaces to visualise its products. This allows users to make use of and explore forecasts without having to transfer large amounts of raw data. This is especially true for products based on ECMWF's 50-member ensemble forecast, where specific processing and visualisation are applied to extract information. Every day, thousands of raw fields are pushed to ECMWF's interactive web charts application, ecCharts, and thousands of products are processed and pushed to ECMWF's institutional web site. ecCharts provides a highly interactive application to display and manipulate recent numerical forecasts for forecasters in national weather services and ECMWF's commercial customers. With ecCharts, forecasters are able to explore ECMWF's medium-range forecasts in far greater detail than has previously been possible on the web, as soon as the forecast becomes available. All ecCharts products are also available through a machine-to-machine web map service based on the OGC Web Map Service (WMS) standard. The ECMWF institutional web site provides access to a large number of graphical products. It was entirely redesigned last year; it now shares the same infrastructure as ecCharts and can benefit from some ecCharts functionalities, for example the dashboard. The dashboard, initially developed for ecCharts, allows users to organise their own collection of products depending on their work flow, and is being further developed. In its first implementation, it presents the user's products in a single interface with fast access to the original product and possibilities of synchronous animation between them. Its functionalities are being extended to give users the freedom to collect not only ecCharts 2D maps and graphs, but also other ECMWF web products such as monthly and seasonal products, scores, and observation monitoring. The dashboard will play a key role in helping users interpret the large amount of information that ECMWF provides. This talk will present examples of how the new user interface can organise complex meteorological maps and graphs, and show the new possibilities users have gained by using the web as a medium.
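
    A sketch of a machine-to-machine request against such a WMS endpoint; the base URL and layer name are hypothetical, while the query parameters are the standard OGC WMS 1.3.0 GetMap set.

```python
# Machine-to-machine GetMap request; the base URL and layer name are
# hypothetical, the query parameters are standard OGC WMS 1.3.0.
import requests

params = {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "msl_pressure",          # hypothetical product layer
    "crs": "EPSG:4326", "bbox": "-90,-180,90,180",
    "width": 1024, "height": 512, "format": "image/png",
}
resp = requests.get("https://wms.example.int/wms", params=params, timeout=60)
with open("msl.png", "wb") as fh:
    fh.write(resp.content)
```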

  19. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories and users have to deal with data sets that are effectively immobile and, in practice, difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  20. Parallel In Situ Indexing for Data-intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinoh; Abbasi, Hasan; Chacon, Luis

    2011-09-09

    As computing power increases exponentially, vast amounts of data are created by many scientific research activities. However, the bandwidth for storing data to disks and reading it back has been improving at a much slower pace. These two trends produce an ever-widening data access gap. Our work brings together two distinct technologies to address this data access issue: indexing and in situ processing. From decades of database research literature, we know that indexing is an effective way to address the data access issue, particularly for accessing a relatively small fraction of the data records. As data sets increase in size, more and more analysts need to use selective data access, which makes indexing even more important for improving data access. The challenge is that most implementations of indexing technology are embedded in large database management systems (DBMS), but most scientific datasets are not managed by any DBMS. In this work, we choose to include indexes with the scientific data instead of requiring the data to be loaded into a DBMS. We use compressed bitmap indexes from the FastBit software, which are known to be highly effective for the query-intensive workloads common to scientific data analysis. To use the indexes, we need to build them first. The index building procedure needs to access the whole data set and may also require a significant amount of compute time. In this work, we adapt in situ processing technology to generate the indexes, thus removing the need to read data from disks and allowing indexes to be built in parallel. The in situ data processing system used is ADIOS, a middleware for high-performance I/O. Our experimental results show that the indexes can improve data access time by up to a factor of 200, depending on the fraction of data selected, and that using an in situ data processing system can effectively reduce the time needed to create the indexes, by up to a factor of 10 with our in situ technique under identical parallel settings.
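
    As a rough, self-contained illustration of the bitmap-indexing idea (not FastBit's actual compressed implementation), the following bins a numeric column into per-bin bitmaps and answers a range query by OR-ing candidate bins:

    ```python
    import numpy as np

    def build_bitmap_index(values, bin_edges):
        """One boolean bitmap per bin; bit i is set if record i falls in that bin."""
        bins = np.digitize(values, bin_edges)
        return {b: bins == b for b in np.unique(bins)}

    def range_query(index, values, bin_edges, lo, hi):
        """Select record indices with lo <= value < hi using the bitmaps."""
        lo_bin = np.digitize([lo], bin_edges)[0]
        hi_bin = np.digitize([hi], bin_edges)[0]
        mask = np.zeros(len(values), dtype=bool)
        for b, bitmap in index.items():
            if lo_bin <= b <= hi_bin:   # candidate bins
                mask |= bitmap
        hits = np.where(mask)[0]
        # Refine edge bins against the raw values.
        return hits[(values[hits] >= lo) & (values[hits] < hi)]

    values = np.random.default_rng(0).uniform(0, 100, 1_000_000)
    edges = np.linspace(0, 100, 21)     # 20 equal-width bins
    index = build_bitmap_index(values, edges)
    print(len(range_query(index, values, edges, 42.0, 43.5)))
    ```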

  1. The War on Poverty’s Experiment in Public Medicine: Community Health Centers and the Mortality of Older Americans†

    PubMed Central

    Bailey, Martha J.; Goodman-Bacon, Andrew

    2015-01-01

    This paper uses the rollout of the first Community Health Centers (CHCs) to study the longer-term health effects of increasing access to primary care. Within ten years, CHCs are associated with a reduction in age-adjusted mortality rates of 2 percent among those 50 and older. The implied 7 to 13 percent decrease in one-year mortality risk among beneficiaries amounts to 20 to 40 percent of the 1966 poor/non-poor mortality gap for this age group. Large effects for those 65 and older suggest that increased access to primary care has longer-term benefits, even for populations with near universal health insurance. (JEL H75, I12, I13, I18, I32, I38, J14) PMID:25999599

  2. Green survivability in Fiber-Wireless (FiWi) broadband access network

    NASA Astrophysics Data System (ADS)

    Liu, Yejun; Guo, Lei; Gong, Bo; Ma, Rui; Gong, Xiaoxue; Zhang, Lincong; Yang, Jiangzi

    2012-03-01

    Fiber-Wireless (FiWi) broadband access network is a promising "last mile" access technology, because it integrates wireless and optical access technologies in terms of their respective merits, such as high capacity and stable transmission from optical access technology, and easy deployment and flexibility from wireless access technology. Since FiWi is expected to carry a large amount of traffic, numerous traffic flows may be interrupted by the failure of network components. Thus, survivability in FiWi is a key issue for reliable and robust service. However, the redundant deployment of backup resources required for survivability usually causes huge energy consumption, which aggravates global warming and hastens the coming energy crisis. Thus, energy saving should be considered in survivability design. In this paper, we focus on green survivability in FiWi, an innovative concept that, to the best of our knowledge, has not been addressed in previous work. We first review and discuss some challenging issues around survivability and energy saving in FiWi, and then propose some instructive solutions for green survivability design. Our work will therefore provide technical references and research motivation for the development of energy-efficient and survivable FiWi in the future.

  3. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
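
    The cache-miss policy described above (check peer nodes before touching local disk) can be sketched as a simple two-level lookup. This is an illustrative simulation of the policy in plain Python, not the authors' actual implementation:

    ```python
    class DistributedBrickCache:
        """Illustrative cache hierarchy: local memory, then peer nodes, then disk."""

        def __init__(self, node_id, peers, load_from_disk):
            self.node_id = node_id
            self.peers = peers                  # other DistributedBrickCache nodes
            self.local = {}                     # brick_id -> voxel data
            self.load_from_disk = load_from_disk

        def lookup_local(self, brick_id):
            return self.local.get(brick_id)

        def fetch(self, brick_id):
            # 1. Local hit: no network, no disk.
            data = self.lookup_local(brick_id)
            if data is not None:
                return data
            # 2. Miss: ask peer nodes first, since the network is
            #    faster here than the local disk.
            for peer in self.peers:
                data = peer.lookup_local(brick_id)
                if data is not None:
                    self.local[brick_id] = data
                    return data
            # 3. Nobody holds the brick: pay the disk-access cost.
            data = self.load_from_disk(brick_id)
            self.local[brick_id] = data
            return data
    ```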

  4. Polyphenolic Composition and Antioxidant Activities of 6 New Turmeric (Curcuma Longa L.) Accessions.

    PubMed

    Chinedum, Eleazu; Kate, Eleazu; Sonia, Chukwuma; Ironkwe, Adanma; Andrew, Igwe

    2015-01-01

    The phytochemical composition and antioxidant capacities of 6 new NRCRI turmeric (Curcuma longa L.) accessions (39, 35, 60, 30, 50 and 41) were determined using standard techniques. The moisture contents of the turmeric samples ranged from 15.75 to 47.80%, and the curcumin contents fell within the range of curcumin obtained from turmeric in other countries of the world. Furthermore, the turmeric accessions contained considerable amounts of antioxidants (measured using 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical and reducing power assays), alkaloids, flavonoids, anthocyanins, and phenolics. There were significant correlations between the anthocyanin contents of the turmeric accessions and their alkaloid (0.744) and flavonoid (0.986) contents, suggesting an additive effect between the anthocyanins and alkaloids in turmeric; between the inhibition of DPPH radical by the turmeric accessions and their flavonoid (0.892) and anthocyanin (0.949) contents; and between the reducing power of the turmeric accessions and their flavonoid (0.973) and anthocyanin (0.974) contents, suggesting that anthocyanins, as flavonoids, largely contribute to the antioxidant activities of turmeric. The positive regression recorded between the inhibition of DPPH radical by the turmeric accessions and quercetin versus reducing power (R2 = 0.852) suggests that any of these methods could be used to assess the antioxidant activities of turmeric. Finally, the study indicated the potential of the turmeric accessions, especially accessions 30 and 50, as promising sources of antioxidants.

  5. Compression for an effective management of telemetry data

    NASA Technical Reports Server (NTRS)

    Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.

    1993-01-01

    A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1) and an efficient system must give users fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performance is well beyond the requirements and proves that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (a compression ratio of 1/13) and the access time for a typical query has been reduced from 975 seconds to 14 seconds.
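
    As an illustration of one of the method families mentioned, here is a minimal run-length encoder/decoder for a slowly varying telemetry parameter; it is a generic sketch, not the T.D.B. system's actual codecs:

    ```python
    def rle_encode(samples):
        """Collapse runs of identical values into (value, count) pairs."""
        runs = []
        for s in samples:
            if runs and runs[-1][0] == s:
                runs[-1][1] += 1
            else:
                runs.append([s, 1])
        return runs

    def rle_decode(runs):
        """Expand (value, count) pairs back into the original sample list."""
        out = []
        for value, count in runs:
            out.extend([value] * count)
        return out

    # A stable on-board parameter compresses extremely well with RLE:
    telemetry = [21.5] * 500 + [21.6] * 300 + [21.5] * 200
    encoded = rle_encode(telemetry)
    assert rle_decode(encoded) == telemetry
    print(f"{len(telemetry)} samples -> {len(encoded)} runs")
    ```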

  6. How Much Is Too Much to Pay for Internet Access? A Behavioral Economic Analysis of Internet Use.

    PubMed

    Broadbent, Julie; Dakki, Michelle A

    2015-08-01

    The popularity of online recreational activities, such as social networking, has dramatically increased the amount of time spent on the Internet. Excessive or inappropriate use of the Internet can result in serious adverse consequences. The current study used a behavioral economic task to determine if the amount of time spent online by problematic and nonproblematic users can be modified by price. The Internet Purchase Task was used to determine how much time undergraduate students (N=233) would spend online at 13 different prices. Despite high demand for Internet access when access was free, time spent online by both problematic and nonproblematic users decreased dramatically, even at low prices. These results suggest that the amount of time spent online may be modified by having a tangible cost associated with use, whereas having free access to the Internet may encourage excessive, problematic use.

  7. Mobile App to Assess Universal Access Compliance.

    PubMed

    Fransolet, Colette

    2016-01-01

    In terms of local legislation, South Africa has a handful of regulations that indirectly require Universal Access, which is itself largely described as facilities for people with disabilities. The most predominant set of regulations is the South African National Building Regulations, with a specific deemed-to-satisfy code titled South African National Standard (SANS) 10400 Part S: Facilities for Persons with Disabilities. Revised in 2011, this building regulation offers some technical guidelines specific to built infrastructure, and largely for people with functional mobility limitations. The term "functional mobility limitations" in the context of this paper refers to people who make use of mobility aids to assist with their functionality in an environment, for example people who use walking aids (sticks, canes or walkers) and people who use wheelchairs. Albeit lacking in specifics around the requirements for other areas of functional limitation, including people who are blind, people who are deaf, and people with cognitive limitations, SANS 10400 Part S is, to date, the most effective regulatory requirement in the country for making facilities more accessible. With only a few experts in South Africa working in the field, the ability to offer clients Universal Access Reviews in terms of basic compliance with SANS 10400 Part S is limited by two major factors. Firstly, the costs associated with employing experts in the field to review infrastructure are mostly too high for clients to carry. Secondly, the amount of time taken to perform reviews on site and then collate the information into a coherent report for the client is far too long. These aspects result in a gap between clients wanting to meet the requirements and being able to have the work completed in a reasonable amount of time. To overcome the challenge of larger institutions and organizations wanting to have their facilities reviewed for compliance with the National Building Regulations, within a tight budget as well as within minimal timeframes, an innovative mobile application was developed by Universal Design Africa. This app heralds a new method of measuring universal access compliance. The operation and format of this technology and its application could be adapted to meet all forms of compliance and information gathering, including international regulations and best practice.

  8. SHIPPING CONTAINER FOR RADIOACTIVE MATERIAL

    DOEpatents

    Nachbar, H.D.; Biggs, B.B.; Tariello, P.J.; George, K.O.

    1963-01-15

    A shipping container is described for transporting a large number of radioactive nuclear fuel element modules which produce a substantial amount of heat. The container comprises a primary pressure vessel and shield, and a rotatable head having an access port that can be indexed with module holders in the container. In order to remove heat generated in the fuel elements, a heat exchanger is arranged within the container and in contact with a heat exchange fluid therein. The heat exchanger communicates with additional external heat exchangers, which dissipate heat to the atmosphere. (AEC)

  9. A convolutional neural network-based screening tool for X-ray serial crystallography.

    PubMed

    Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K

    2018-05-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.
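
    A minimal sketch of the kind of convolutional classifier described, written in PyTorch, is shown below: a tiny network mapping a single-channel diffraction-image patch to a hit/no-hit probability. The architecture is purely illustrative, not the authors' published model.

    ```python
    import torch
    import torch.nn as nn

    class BraggSpotScreen(nn.Module):
        """Tiny illustrative CNN: image patch in, probability of Bragg spots out."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
            )

        def forward(self, x):
            return torch.sigmoid(self.classifier(self.features(x)))

    model = BraggSpotScreen()
    patches = torch.randn(8, 1, 128, 128)   # batch of 8 single-channel patches
    scores = model(patches)                  # values in (0, 1); > 0.5 -> likely "hit"
    print(scores.shape)
    ```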

  10. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage sub-systems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old could still be updated and thus resided on conventional magnetic disks; data older than a year was considered archived and was placed on optical disks. The ability to archive data to optical disk and still have access to that data gives the system a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. The cost benefits of archival storage devices can therefore be incorporated into the Oracle storage medium and I/O subsystem without losing any transaction processing functionality, while at the same time giving an organization access to all of its data.

  11. Dust-obscured star-forming galaxies in the early universe

    NASA Astrophysics Data System (ADS)

    Wilkins, Stephen M.; Feng, Yu; Di Matteo, Tiziana; Croft, Rupert; Lovell, Christopher C.; Thomas, Peter

    2018-02-01

    Motivated by recent observational constraints on dust-reprocessed emission in star-forming galaxies at z ∼ 6 and above, we use the very large cosmological hydrodynamical simulation BLUETIDES to explore predictions for the amount of dust-obscured star formation in the early Universe (z > 8). BLUETIDES matches current observational constraints on both the UV luminosity function and the galaxy stellar mass function, and predicts that approximately 90 per cent of the star formation in high-mass (M* > 10^10 M⊙) galaxies at z = 8 is already obscured by dust. The relationship between dust attenuation and stellar mass predicted by BLUETIDES is consistent with that observed at lower redshift. However, observations of several individual objects at z > 6 are discrepant with the predictions, though it is possible that their uncertainties may have been underestimated. We find that the predicted surface density of z ≥ 8 submm sources is below that accessible to current Herschel, SCUBA-2 and Atacama Large Millimetre Array (ALMA) submm surveys. However, as ALMA continues to accrue additional survey area, the population of z > 8 dust-obscured galaxies may become accessible in the near future.

  12. A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web

    PubMed Central

    de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  13. A ubiquitous sensor network platform for integrating smart devices into the semantic sensor web.

    PubMed

    de Vera, David Díaz Pardo; Izquierdo, Alvaro Sigüenza; Vercher, Jesús Bernat; Hernández Gómez, Luis Alfonso

    2014-06-18

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs.

  14. SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena

    PubMed Central

    Tohsato, Yukako; Ho, Kenneth H. L.; Kyoda, Koji; Onami, Shuichi

    2016-01-01

    Motivation: Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. Results: We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. Availability and Implementation: SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp PMID:27412095
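
    Since SSBD exposes its data through a REST API, a minimal access sketch follows. The abstract documents only the base URL, so the endpoint path and query parameters below are hypothetical placeholders, not the real API schema.

    ```python
    import requests

    BASE_URL = "http://ssbd.qbic.riken.jp"

    # Hypothetical endpoint and parameters -- placeholders only.
    resp = requests.get(
        f"{BASE_URL}/api/data",
        params={"organism": "Escherichia coli"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()
    print(len(records))
    ```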

  15. Program Analyzes Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Vandemark, Doug; Hancock, David; Tran, Ngan

    2004-01-01

    A computer program has been written to perform several analyses of radar altimeter data. The program was designed to improve on previous methods of analysis of altimeter engineering data by (1) facilitating and accelerating the analysis of large amounts of data in a more direct manner and (2) improving the ability to estimate the performance of radar-altimeter instrumentation and provide data corrections. The data in question are openly available to the international scientific community and can be downloaded from anonymous file-transfer-protocol (FTP) locations that are accessible via links from altimetry Web sites. The software estimates noise in range measurements, estimates corrections for electromagnetic bias, and performs statistical analyses on various parameters for comparison of different altimeters. Whereas prior techniques used to perform similar analyses of altimeter range noise require comparison of data from repetitions of satellite ground tracks, the present software uses a high-pass filtering technique to obtain similar results from single satellite passes. Eliminating the requirement for repeat-track analysis facilitates the analysis of large amounts of satellite data to assess subtle variations in range noise.
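
    The single-pass noise-estimation idea (high-pass filter a range profile so the slowly varying geophysical signal drops out, then take the standard deviation of the residual) can be sketched as follows; the cutoff and sampling values are arbitrary illustrations, not the program's actual settings:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def range_noise_estimate(ranges, fs_hz=1.0, cutoff_hz=0.2):
        """Estimate altimeter range noise from a single pass.

        High-pass filtering removes the slowly varying surface signal;
        the standard deviation of the residual approximates instrument noise.
        """
        b, a = butter(4, cutoff_hz, btype="highpass", fs=fs_hz)
        residual = filtfilt(b, a, ranges)
        return float(np.std(residual))

    # Synthetic pass: smooth surface trend plus 2 cm white noise.
    t = np.arange(2000)
    trend = 0.5 * np.sin(2 * np.pi * t / 800.0)
    ranges = trend + np.random.default_rng(1).normal(0.0, 0.02, t.size)
    print(f"estimated range noise: {range_noise_estimate(ranges):.3f} m")
    ```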

  16. SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena.

    PubMed

    Tohsato, Yukako; Ho, Kenneth H L; Kyoda, Koji; Onami, Shuichi

    2016-11-15

    Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp. © The Author 2016. Published by Oxford University Press.

  17. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    NASA Astrophysics Data System (ADS)

    Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration

    2017-10-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  18. Efficient Memory Access with NumPy Global Arrays using Local Memory Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Berghofer, Dan C.

    This paper discusses work on Global Arrays of data on distributed multi-computer systems and improving their performance. The work was completed at Pacific Northwest National Laboratory in the Science Undergraduate Laboratory Internship program in the summer of 2013, for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate, and was done on the Global Arrays Toolkit developed by this group. This toolkit is an interface that lets programmers more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on large amounts of data, sometimes so large that individual computers cannot hold all of it. The data are held in array form and can best be processed on supercomputers, which often consist of a network of individual computers doing their computation in parallel. One major challenge for this sort of programming is that operations on arrays spread over multiple computers are very complex, and an interface is needed so that these arrays appear to reside on a single computer. This is what Global Arrays does. The work done here introduces more efficient operations on that data that require less copying, which saves considerable time because copying data between many different computers is time intensive. The approach: when the operands of a binary operation reside on the same computer, they are accessed without being copied; when they reside on separate computers, only one operand is copied. This saves time through reduced copying, although more data-access operations are performed.
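
    A toy simulation of that copy-avoidance policy is sketched below, with plain Python objects standing in for distributed array chunks; the real Global Arrays Toolkit performs one-sided communication over a network, which this sketch only imitates:

    ```python
    import numpy as np

    class Chunk:
        """Stand-in for a distributed-array chunk pinned to one node."""
        def __init__(self, node, data):
            self.node = node
            self.data = data

    def add_chunks(a, b, my_node):
        """Element-wise add under the copy-avoidance policy: local operands
        are used in place; a remote operand is copied once (simulating a
        one-sided get over the network)."""
        lhs = a.data if a.node == my_node else a.data.copy()
        rhs = b.data if b.node == my_node else b.data.copy()
        # If both operands were local, no copies happened at all.
        return lhs + rhs

    x = Chunk(node=0, data=np.arange(4.0))
    y = Chunk(node=1, data=np.ones(4))
    print(add_chunks(x, y, my_node=0))   # only y's data is copied
    ```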

  19. Special issue on enabling open and interoperable access to Planetary Science and Heliophysics databases and tools

    NASA Astrophysics Data System (ADS)

    2018-01-01

    The large amount of data generated by modern space missions calls for a change in the organization of data distribution and access procedures. Although long-term archives exist for telescopic and space-borne observations, high-level functions need to be developed on top of these repositories to make Planetary Science and Heliophysics data more accessible and to favor interoperability. Results of simulations and reference laboratory data also need to be integrated to support and interpret the observations. Interoperable software and interfaces have recently been developed in many scientific domains. The Virtual Observatory (VO) interoperable standards developed for Astronomy by the International Virtual Observatory Alliance (IVOA) can be adapted to Planetary Sciences, as demonstrated by the VESPA (Virtual European Solar and Planetary Access) team within the Europlanet-H2020-RI project. Other communities have developed their own standards: GIS (Geographic Information System) for Earth and planetary surface tools, SPASE (Space Physics Archive Search and Extract) for space plasma, PDS4 (NASA Planetary Data System, version 4) and IPDA (International Planetary Data Alliance) for planetary mission archives, etc., and an effort to make them all interoperable is starting, including automated workflows to process related data from different sources.

  20. High school students as seismic network analysts

    NASA Astrophysics Data System (ADS)

    Filatov, P.; Fedorenko, Yu.; Beketova, E.; Husebye, E.

    2003-04-01

    Many research organizations hold large amounts of collected seismological data. Some data centers keep data closed to outside scientists; others have specific access interfaces that are not suitable for education. For the SeisSchool Network in Norway we have developed a universal interface for research and study. The main principles of our interface are:
    - Accessibility: it provides data access for everybody, anywhere, via the Internet, without restrictions on hardware platform, operating system, Internet browser or connection bandwidth.
    - Informativity: it visualizes data and includes examples of processing routines (filters, envelopes), including phase picking and event location. It also provides access to a variety of seismological information.
    - Scalability: it provides storage for various types of seismic data and a multitude of services for many user levels.
    This interface (http://pcg1.ifjf.uib.no) helps analysts in basic research and, together with the information on our Web site, introduces students to the theory and practice of seismology. Based on our Web interface, a group of students won a Norwegian Young Scientists award. In this presentation we demonstrate the advantages of our interface, on-line data processing, and how to monitor our network in near real time.

  1. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.

    PubMed

    Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos

    2015-01-01

    The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of these and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8× faster than a comparable multi-core CPU version, and about 3× to 5× faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
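
    For readers unfamiliar with the data structure, a compact single-step backward-search sketch is given below (the paper's k-step variant consumes k query symbols per round of memory accesses, and real indexes precompute occurrence tables instead of scanning):

    ```python
    def build_fm_index(text):
        """Naive FM-index over a small string; text must end with sentinel '$'."""
        n = len(text)
        sa = sorted(range(n), key=lambda i: text[i:])        # suffix array
        bwt = "".join(text[i - 1] for i in sa)               # Burrows-Wheeler transform
        # C[c] = number of characters in text strictly smaller than c.
        C, total = {}, 0
        for c in sorted(set(text)):
            C[c] = total
            total += text.count(c)
        return bwt, C

    def occ(bwt, c, i):
        """Occurrences of c in bwt[:i] (naive scan; real indexes precompute this)."""
        return bwt[:i].count(c)

    def backward_search(bwt, C, pattern):
        """Count occurrences of pattern in the indexed text."""
        lo, hi = 0, len(bwt)
        for c in reversed(pattern):
            lo = C[c] + occ(bwt, c, lo)
            hi = C[c] + occ(bwt, c, hi)
            if lo >= hi:
                return 0
        return hi - lo

    bwt, C = build_fm_index("mississippi$")
    print(backward_search(bwt, C, "issi"))   # -> 2
    print(backward_search(bwt, C, "sip"))    # -> 1
    ```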

  2. The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets.

    PubMed

    Anderson, J R; Mohammed, S; Grimm, B; Jones, B W; Koshevoy, P; Tasdizen, T; Whitaker, R; Marc, R E

    2011-01-01

    Modern microscope automation permits the collection of vast amounts of continuous anatomical imagery in both two and three dimensions. These large data sets present significant challenges for data storage, access, viewing, annotation and analysis. The cost and overhead of collecting and storing the data can be extremely high. Large data sets quickly exceed an individual's capability for timely analysis and present challenges in efficiently applying transforms, if needed. Finally annotated anatomical data sets can represent a significant investment of resources and should be easily accessible to the scientific community. The Viking application was our solution created to view and annotate a 16.5 TB ultrastructural retinal connectome volume and we demonstrate its utility in reconstructing neural networks for a distinctive retinal amacrine cell class. Viking has several key features. (1) It works over the internet using HTTP and supports many concurrent users limited only by hardware. (2) It supports a multi-user, collaborative annotation strategy. (3) It cleanly demarcates viewing and analysis from data collection and hosting. (4) It is capable of applying transformations in real-time. (5) It has an easily extensible user interface, allowing addition of specialized modules without rewriting the viewer. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  3. Effects of network node consolidation in optical access and aggregation networks on costs and power consumption

    NASA Astrophysics Data System (ADS)

    Lange, Christoph; Hülsermann, Ralf; Kosiankowski, Dirk; Geilhardt, Frank; Gladisch, Andreas

    2010-01-01

    The increasing demand for higher bit rates in access networks requires fiber deployment closer to the subscriber, resulting in fiber-to-the-home (FTTH) access networks. Besides higher access bit rates, optical access network infrastructure and related technologies enable the network operator to establish larger service areas, resulting in a simplified network structure with a lower number of network nodes. By changing the network structure, network operators want to benefit from a changed network cost structure: decreasing, in the short and mid term, the upfront investments for network equipment due to concentration effects, as well as reducing energy costs due to the higher energy efficiency of large network sites housing a large amount of network equipment. In the long term, savings in operational expenditures (OpEx) due to the closing of central office (CO) sites are also expected. In this paper, different architectures for optical access networks based on state-of-the-art technology are analyzed with respect to network installation costs and power consumption in the context of access node consolidation. Network planning and dimensioning results are calculated for a realistic network scenario for Germany. All node consolidation scenarios are compared against a gigabit-capable passive optical network (GPON) based FTTH access network operated from the conventional CO sites. The results show that a moderate reduction of the number of access nodes may be beneficial, since in that case the capital expenditures (CapEx) do not rise extraordinarily and savings in OpEx related to the access nodes are expected. The total power consumption does not change significantly with a decreasing number of access nodes, but clustering effects enable more energy-efficient network operation and optimized power purchase order quantities, leading to benefits in energy costs.

  4. 78 FR 38227 - Connect America Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-26

    ... different per- location support amounts based on the existing level of Internet access ($550 for homes with low-speed Internet access and $775, as in the first round, for homes with only dial-up access), and... Internet access. We adopt a process for challenges to the eligibility of specific areas where price cap...

  5. The Design of PC/MISI, a PC-Based Common User Interface to Remote Information Storage and Retrieval Systems. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The amount of information contained in the data bases of large-scale information storage and retrieval systems is very large and growing at a rapid rate. The methods available for accessing this information have not been successful in making the information easily available to the people who have the greatest need for it. This thesis describes the design of a personal computer based system which will provide a means for these individuals to retrieve this data through one standardized interface. The thesis identifies each of the major problems associated with providing access to casual users of IS and R systems and describes the manner in which these problems are to be solved by utilizing the local processing power of a PC. Additional capabilities, not available with standard access methods, are also provided to improve the user's ability to make use of this information. The design of PC/MISI is intended to facilitate its use as a research vehicle. Evaluation mechanisms and possible areas of future research are described. The PC/MISI development effort is part of a larger research effort directed at improving access to remote IS and R systems. This research effort, supported in part by NASA, is also reviewed.

  6. Converting information from paper to optical media

    NASA Technical Reports Server (NTRS)

    Deaton, Timothy N.; Tiller, Bruce K.

    1990-01-01

    The technology of converting large amounts of paper into electronic form is described for use in information management systems based on optical disk storage. The space savings and photographic nature of microfiche are combined in these systems with the advantages of computerized data (fast and flexible retrieval of graphics and text, simultaneous instant access for multiple users, and easy manipulation of data). It is noted that electronic imaging systems offer a unique opportunity to dramatically increase the productivity and profitability of information systems. Particular attention is given to the CALS (Computer-aided Acquisition and Logistic Support) system.

  7. Contrast use in relation to the arterial access site for percutaneous coronary intervention: A comprehensive meta-analysis of randomized trials

    PubMed Central

    Shah, Rahman; Mattox, Anthony; Khan, M Rehan; Berzingi, Chalak; Rashid, Abdul

    2017-01-01

    AIM To compare the amount of contrast used during percutaneous coronary intervention (PCI) via trans-radial access (TRA) vs trans-femoral access (TFA). METHODS Scientific databases and websites were searched for randomized controlled trials (RCTs). Data were extracted by two independent reviewers and summarized as the weighted mean difference (WMD) of contrast used with a 95%CI using a random-effects model. RESULTS The meta-analysis included 13 RCTs with a total of 3165 patients. There was no difference between the two strategies in the amount of contrast used (WMD = -0.65 mL, 95%CI: -10.94 to 9.46 mL; P = 0.901). CONCLUSION This meta-analysis shows that in patients undergoing PCI, the amount of contrast volume used was not different between TRA and TFA. PMID:28515857

  8. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  9. Using external data sources to improve audit trail analysis.

    PubMed

    Herting, R L; Asaro, P V; Roth, A C; Barnes, M R

    1999-01-01

    Audit trail analysis is the primary means of detecting inappropriate use of the medical record. While audit logs contain large amounts of information, the information required to determine useful user-patient relationships is often not present, because most audit trail analysis systems rely on the limited information available within the medical record system. We report a feature of the STAR (System for Text Archive and Retrieval) audit analysis system in which information available in the medical record is augmented with external information sources such as database sources, Lightweight Directory Access Protocol (LDAP) server sources, and World Wide Web (WWW) database sources. We discuss several issues that arise when combining the information from each of these disparate information sources. Furthermore, we explain how the enhanced person-specific information obtained can be used to determine user-patient relationships that might signify a motive for inappropriately accessing a patient's medical record.
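
    As an illustration of augmenting an audit-log record with a directory lookup, here is a minimal sketch using the ldap3 Python library; the server address, search base, and attribute names are hypothetical placeholders, not the STAR system's actual sources:

    ```python
    from ldap3 import ALL, Connection, Server

    # Hypothetical directory settings -- placeholders only.
    server = Server("ldap://directory.example.org", get_info=ALL)
    conn = Connection(server, auto_bind=True)   # anonymous bind for the sketch

    def enrich_audit_entry(entry):
        """Attach department/title information from LDAP to a raw audit record."""
        conn.search(
            search_base="ou=people,dc=example,dc=org",
            search_filter=f"(uid={entry['user_id']})",
            attributes=["departmentNumber", "title"],
        )
        if conn.entries:
            person = conn.entries[0]
            entry["department"] = str(person.departmentNumber)
            entry["title"] = str(person.title)
        return entry

    record = {"user_id": "jdoe", "patient_id": "12345", "action": "view_chart"}
    print(enrich_audit_entry(record))
    ```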

  10. Visualization of Multi-mission Astronomical Data with ESASky

    NASA Astrophysics Data System (ADS)

    Baines, Deborah; Giordano, Fabrizio; Racero, Elena; Salgado, Jesús; López Martí, Belén; Merín, Bruno; Sarmiento, María-Henar; Gutiérrez, Raúl; Ortiz de Landaluce, Iñaki; León, Ignacio; de Teodoro, Pilar; González, Juan; Nieto, Sara; Segovia, Juan Carlos; Pollock, Andy; Rosa, Michael; Arviset, Christophe; Lennon, Daniel; O'Mullane, William; de Marchi, Guido

    2017-02-01

    ESASky is a science-driven discovery portal to explore the multi-wavelength sky and visualize and access multiple astronomical archive holdings. The tool is a web application that requires no prior knowledge of any of the missions involved and gives users world-wide simplified access to the highest-level science data products from multiple astronomical space-based astronomy missions plus a number of ESA source catalogs. The first public release of ESASky features interfaces for the visualization of the sky in multiple wavelengths, the visualization of query results summaries, and the visualization of observations and catalog sources for single and multiple targets. This paper describes these features within ESASky, developed to address use cases from the scientific community. The decisions regarding the visualization of large amounts of data and the technologies used were made to maximize the responsiveness of the application and to keep the tool as useful and intuitive as possible.

  11. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining a low cost. The performance of file access is a decisive factor for the HEP computing system. A new deployment model of the hybrid storage system in High Energy Physics is proposed and shown to have higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the size of the logic block are also given. In all evaluations, sequential read, sequential write, random read and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing big files in HEP.

  12. Enhanced Information Retrieval Using AJAX

    NASA Astrophysics Data System (ADS)

    Kachhwaha, Rajendra; Rajvanshi, Nitin

    2010-11-01

    Information retrieval deals with the representation, storage, organization of, and access to information items. The representation and organization of information items should provide the user with easy access to the information. With the rapid development of the Internet, large amounts of digitally stored information are readily available on the World Wide Web. This information is so vast that it becomes increasingly difficult and time consuming for users to find the information relevant to their needs. The explosive growth of information on the Internet has greatly increased the need for information retrieval systems. However, most search engines are using conventional information retrieval systems. An information system needs to implement sophisticated pattern matching tools to determine contents at a faster rate. AJAX has recently emerged as a new tool with which the process of information retrieval can become fast, so that information reaches the user at a faster pace than with conventional retrieval systems.

  13. Brazilian experiments in Mobile Learning for Health Professionals.

    PubMed

    Pereira, Samáris Ramiro; Loddi, Sueli Aparecida; Larangeira, Valmir Aparecido; Labrada, Luis; Bandiera-Paiva, Paulo

    2013-01-01

    Distance education has evolved with the technology available in each new decade. The evolution and spread of mobile technology from the 2000s onward enabled its migration to a new platform: mobile learning. This makes it possible for professionals and students to carry multimedia tools with Internet access to learning resources and professional references. This new concept fits the needs of health care very well: students must absorb and put into practice large amounts of technical knowledge, and professionals must stay constantly updated. Distance education in health has gained prominence in Brazil, a country with a geographically dispersed body of professionals and with research and training centers concentrated in the capitals. Updating field teams is a difficult task, but modern technologies can contribute to the teachers who use them. This paper, through the methodology of a literature review, presents experiments with these technologies in health environments and discusses their implications.

  14. Return to nursing home investment: Issues for public policy

    PubMed Central

    Baldwin, Carliss Y.; Bishop, Christine E.

    1984-01-01

    Because Government policy does much to determine the return available to nursing home investment, the profitability of the nursing home industry has been a subject of controversy since Government agencies began paying a large portion of the Nation's nursing home bill. Controversy appears at several levels. First is the rather narrow concern, often conceived in accounting terms, of the appropriate reimbursement of capital-related expense under Medicaid and Medicare. Second is the concern about how return to capital affects the flow of investment into nursing homes, leading either to inadequate access to care or to over-capacity. Third is the concern about how sources of return to nursing home investment affect the pattern of nursing home ownership and the amount of equity held by owners, since the pattern of ownership and amount of equity have been linked to quality of care. PMID:10310945

  15. Web servers and services for electrostatics calculations with APBS and PDB2PQR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unni, Samir; Huang, Yong; Hanson, Robert M.

    APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to the necessary amount of computational capability. This not only increases accessibility of the software to a wider range of scientists, educators, and students but also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization of electrostatic potentials as calculated by APBS. This web interface also uses the Opal framework, which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/.

  16. Web servers and services for electrostatics calculations with APBS and PDB2PQR

    PubMed Central

    Unni, Samir; Huang, Yong; Hanson, Robert; Tobias, Malcolm; Krishnan, Sriram; Li, Wilfred W.; Nielsen, Jens E.; Baker, Nathan A.

    2011-01-01

    APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a Web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to the necessary amount of computational capability. This not only increases accessibility of the software to a wider range of scientists, educators, and students but also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization of electrostatic potentials as calculated by APBS. This web interface also uses the Opal framework, which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/. PMID:21425296

  17. MouseNet database: digital management of a large-scale mutagenesis project.

    PubMed

    Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M

    2000-07-01

    The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will finally store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components:
    * Animal Management System (AMS)
    * Sample Tracking System (STS)
    * Result Documentation System (RDS)
    MouseNet(c) provides the following major advantages:
    * accessibility from different client platforms via the Internet
    * a full-featured multi-user system (including access restriction and data locking mechanisms)
    * reliance on a professional RDBMS (relational database management system) running on a UNIX server platform
    * workflow functions and a variety of plausibility checks

  18. Oceans of Data: In what ways can learning research inform the development of electronic interfaces and tools for use by students accessing large scientific databases?

    NASA Astrophysics Data System (ADS)

    Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.

    2012-12-01

    The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, Visual Perception, Schemata formation and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it doesn't exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.

  19. Relative Fluid Novelty Differentially Alters the Time Course of Limited-Access Ethanol and Water Intake in Selectively Bred High Alcohol Preferring Mice

    PubMed Central

    Linsenbardt, David N.; Boehm, Stephen L.

    2015-01-01

    Background The influence of previous alcohol (ethanol) drinking experience on increasing the rate and amount of future ethanol consumption might be a genetically-regulated phenomenon critical to the development and maintenance of repeated excessive ethanol abuse. We have recently found evidence supporting this view, wherein inbred C57BL/6J (B6) mice develop progressive increases in the rate of binge-ethanol consumption over repeated Drinking-in-the-Dark (DID) ethanol access sessions (i.e. ‘front-loading’). The primary goal of the present study was to evaluate identical parameters in High Alcohol Preferring (HAP) mice to determine if similar temporal alterations in limited-access ethanol drinking develop in a population selected for high ethanol preference/intake under continuous (24hr) access conditions. Methods Using specialized volumetric drinking devices, HAP mice received 14 daily 2 hour DID ethanol or water access sessions. A subset of these mice was then given one day access to the opposite assigned fluid on day 15. Home cage locomotor activity was recorded concomitantly on each day of these studies. The possibility of behavioral/metabolic tolerance was evaluated on day 16 using experimenter administered ethanol. Results The amount of ethanol consumed within the first 15 minutes of access increased markedly over days. However, in contrast to previous observations in B6 mice, ethanol front-loading was also observed on day 15 in mice that only had previous DID experience with water. Furthermore, a decrease in the amount of water consumed within the first 15 minutes of access compared to animals given repeated water access was observed on day 15 in mice with 14 previous days of ethanol access. Conclusions These data further illustrate the complexity and importance of the temporal aspects of limited-access ethanol consumption, and suggest that previous procedural/fluid experience in HAP mice selectively alters the time course of ethanol and water consumption. PMID:25833024

  20. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessible through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as the Web Feature Service (WFS). While these standards help enable client applications to gain access to heterogeneous data stored in different formats from diverse sources, the usability of the access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten to WFS getFeature requests and SQL queries to databases. The method also has the benefit of being able to utilize existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
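
    The rewriting idea described above can be sketched in a few lines: a query phrased against ontology terms is mapped, through a class-and-property mapping table, into both a WFS getFeature request and a parameterized SQL statement, so no ontology instances are ever materialized. The following Python sketch is illustrative only; the ontology terms, feature-type names, table names, and endpoint URL are invented, not those of the paper.

```python
from urllib.parse import urlencode

# Invented mapping from ontology classes/properties to a WFS feature
# type and a database table with its columns.
MAPPINGS = {
    "hydro:River": {
        "wfs_typename": "topp:rivers",
        "sql_table": "rivers",
        "properties": {"hydro:name": "river_name", "hydro:lengthKm": "length_km"},
    }
}

def rewrite_to_wfs(klass, prop, value, base_url="http://example.org/wfs"):
    m = MAPPINGS[klass]
    ogc_filter = ("<Filter><PropertyIsEqualTo><PropertyName>"
                  f"{m['properties'][prop]}</PropertyName>"
                  f"<Literal>{value}</Literal></PropertyIsEqualTo></Filter>")
    params = {"service": "WFS", "version": "1.1.0", "request": "GetFeature",
              "typename": m["wfs_typename"], "filter": ogc_filter}
    return f"{base_url}?{urlencode(params)}"

def rewrite_to_sql(klass, prop, value):
    m = MAPPINGS[klass]
    # Parameterized so the rewritten query is safe to execute directly.
    return f"SELECT * FROM {m['sql_table']} WHERE {m['properties'][prop]} = %s", (value,)

print(rewrite_to_wfs("hydro:River", "hydro:name", "Danube"))
print(rewrite_to_sql("hydro:River", "hydro:lengthKm", 2850))
```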

  1. Demand access communications for TDRSS users

    NASA Technical Reports Server (NTRS)

    Zillig, David; Weinberg, Aaron; Mcomber, Robert

    1994-01-01

    The Tracking and Data Relay Satellite System (TDRSS) has long been used to provide reliable low and high-data rate relay services between user spacecraft in Earth orbit and the ground. To date, these TDRSS services have been implemented via prior scheduling based upon estimates of user needs and mission event timelines. While this approach may be necessary for large users that require greater amounts of TDRSS resources, TDRSS can potentially offer the planned community of smaller science missions (e.g., the small explorer missions), and other emerging users, the unique opportunity for services on demand. In particular, innovative application of the existing TDRSS Multiple Access (MA) subsystem, with its phased array antenna, could be used to implement true demand access services without modification to either the TDRSS satellites or the user transponder, thereby introducing operational and performance benefits to both the user community and the Space Network. In this paper, candidate implementations of demand access service via the TDRSS MA subsystem are examined in detail. Both forward and return link services are addressed, and a combination of qualitative and quantitative assessments is provided. The paper also identifies further areas for investigation in this ongoing activity that is being conducted by GSFC/Code 531 under the NASA Code O Advanced Systems Program.

  2. Extended outlook: description, utilization, and daily applications of cloud technology in radiology.

    PubMed

    Gerard, Perry; Kapadia, Neil; Chang, Patricia T; Acharya, Jay; Seiler, Michael; Lefkovitz, Zvi

    2013-12-01

    The purpose of this article is to discuss the concept of cloud technology, its role in medical applications and radiology, the role of the radiologist in using and accessing these vast resources of information, and privacy concerns and HIPAA compliance strategies. Cloud computing is the delivery of shared resources, software, and information to computers and other devices as a metered service. This technology has a promising role in the sharing of patient medical information and appears to be particularly suited for application in radiology, given the field's inherent need for storage and access to large amounts of data. The radiology cloud has significant strengths, such as providing centralized storage and access, reducing unnecessary repeat radiologic studies, and potentially allowing radiologic second opinions more easily. There are significant cost advantages to cloud computing because of a decreased need for infrastructure and equipment by the institution. Private clouds may be used to ensure secure storage of data and compliance with HIPAA. In choosing a cloud service, there are important aspects, such as disaster recovery plans, uptime, and security audits, that must be considered. Given that the field of radiology has become almost exclusively digital in recent years, the future of secure storage and easy access to imaging studies lies within cloud computing technology.

  3. Big Data challenges and solutions in building the Global Earth Observation System of Systems (GEOSS)

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Nativi, Stefano; Santoro, Mattia; Boldrini, Enrico

    2014-05-01

    The Group on Earth Observation (GEO) is a voluntary partnership of governments and international organizations launched in response to calls for action by the 2002 World Summit on Sustainable Development and by the G8 (Group of Eight) leading industrialized countries. These high-level meetings recognized that international collaboration is essential for exploiting the growing potential of Earth observations to support decision making in an increasingly complex and environmentally stressed world. To this aim, GEO is constructing the Global Earth Observation System of Systems (GEOSS) on the basis of a 10-Year Implementation Plan for the period 2005 to 2015, when it will become operational. As a large-scale integrated system handling large datasets such as those provided by Earth Observation, GEOSS needs to face several challenges related to big data handling and big data infrastructure management. Referring to the traditional multiple-V characterization of Big Data (Volume, Variety, Velocity, Veracity and Visualization), it is evident that most of these characteristics can be found in data handled by GEOSS. In particular, concerning Volume, Earth Observation already generates a large amount of data, which can be estimated in the range of petabytes (10^15 bytes), with exabytes (10^18 bytes) already targeted. Moreover, the challenge is related not only to data size, but also to the large number of datasets (not necessarily of large size) that systems need to manage. Variety is the other main challenge, since datasets coming from different sensors and processed for different use cases are published with highly heterogeneous metadata and data models, through different service interfaces. Innovative multidisciplinary applications need to access and use those datasets in a harmonized way. Moreover, Earth Observation data are growing in size and variety at an exceptionally fast rate, and new technologies and applications, including crowdsourcing, will further increase data volume and variety in the near future. The current implementation of GEOSS already addresses several big data challenges. In particular, the brokered architecture adopted in the GEOSS Common Infrastructure, with the deployment of the GEO DAB (Discovery and Access Broker), makes it possible to connect more than 20 large EO infrastructures while keeping them autonomous, as required by their own mandates and governance. These infrastructures make more than 60 million unique resources discoverable and accessible through the GEO Portal. Through the GEO DAB, users are able to seamlessly discover resources provided by different infrastructures and access them in a harmonized way, collecting datasets from different sources in a common environment (same coordinate reference system, spatial subset, format, etc.). Through the GEONETCast system, GEOSS is also providing a solution to the Velocity challenge, delivering EO resources to developing countries with low-bandwidth connections. Several research efforts addressing the other Big Data V challenges in GEOSS are ongoing, including quality representation for Veracity (as in the FP7 GeoViQua project), brokering big data analytics platforms for Velocity, and support of other EO resources for Variety (such as modelling resources in the Model Web).
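
    The brokering idea is essentially an adapter layer: each connected infrastructure keeps its own schema, and the broker maps every record into one harmonized form at query time. The Python sketch below illustrates this under invented assumptions (two toy source schemas and field names); the real GEO DAB also mediates protocols, authentication, and data transformation.

```python
from dataclasses import dataclass

@dataclass
class HarmonizedRecord:
    title: str
    bbox: tuple      # (west, south, east, north) in WGS84
    access_url: str

def from_source_a(rec):
    # Source A uses flat 'west/south/east/north' keys (invented schema).
    return HarmonizedRecord(rec["name"],
                            (rec["west"], rec["south"], rec["east"], rec["north"]),
                            rec["url"])

def from_source_b(rec):
    # Source B nests the extent and names its fields differently (invented).
    return HarmonizedRecord(rec["dataset_title"], tuple(rec["extent"]), rec["download"])

def broker_search(sources, keyword):
    # Query every connected infrastructure and harmonize the results.
    results = []
    for fetch, adapt in sources:
        results.extend(adapt(r) for r in fetch(keyword))
    return results

demo_sources = [
    (lambda kw: [{"name": "SST", "west": -10, "south": 30, "east": 5,
                  "north": 45, "url": "http://a.example/sst"}], from_source_a),
    (lambda kw: [{"dataset_title": "NDVI", "extent": [-20, 10, 55, 40],
                  "download": "http://b.example/ndvi"}], from_source_b),
]
print(broker_search(demo_sources, "ocean"))
```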

  4. Facilitating access to information in large documents with an intelligent hypertext system

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1993-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.
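
    The abstract describes indexing that is reinforced on successful retrievals and revised on failures. The following toy Python sketch shows one way such context-sensitive relevance feedback can work; the update rule, learning rate, and document identifiers are illustrative assumptions, not CID's actual mechanism.

```python
from collections import defaultdict

index = defaultdict(float)          # (context, doc_id) -> learned relevance weight

def feedback(context, doc_id, relevant, lr=0.2):
    # Move the stored weight toward the observed outcome: reinforce on
    # success, decay on failure.
    key = (context, doc_id)
    target = 1.0 if relevant else 0.0
    index[key] += lr * (target - index[key])

def retrieve(context, top_n=3):
    # Rank documents previously seen in this context by learned weight.
    hits = [(doc, w) for (ctx, doc), w in index.items() if ctx == context]
    return sorted(hits, key=lambda x: -x[1])[:top_n]

feedback("power subsystem requirements", "doc-042", relevant=True)
feedback("power subsystem requirements", "doc-017", relevant=False)
print(retrieve("power subsystem requirements"))
```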

  5. The Brazilian Science Data Center (BSDC)

    NASA Astrophysics Data System (ADS)

    de Almeida, Ulisses Barres; Bodmann, Benno; Giommi, Paolo; Brandt, Carlos H.

    Astrophysics and Space Science are becoming increasingly characterised by what is now known as “big data”, the bottlenecks for progress partly shifting from data acquisition to “data mining”. The truth is that the amount and rate of data accumulation in many fields already surpasses the local capabilities for its processing and exploitation, and the efficient conversion of scientific data into knowledge is everywhere a challenge. The result is that, to a large extent, isolated data archives risk being progressively likened to “data graveyards”, where the information stored is not reused for scientific work. Responsible and efficient use of these large datasets means democratising access and extracting the most science possible from them, which in turn signifies improving data accessibility and integration. Improving data processing capabilities is another important issue specific to researchers and computer scientists of each field. The project presented here wishes to exploit the enormous potential opened up by information technology in our age to advance a model for a science data center in astronomy which aims to expand data accessibility and integration to the largest possible extent and with the greatest efficiency for scientific and educational use. Greater access to data means more people producing and benefiting from information, whereas larger integration of related data from different origins means a greater research potential and increased scientific impact. The project of the BSDC is preoccupied, primarily, with providing tools and solutions for the Brazilian astronomical community. It nevertheless capitalizes on extensive international experience, and is developed in full cooperation with the ASI Science Data Center (ASDC), from the Italian Space Agency, granting it an essential ingredient of internationalisation. The BSDC is Virtual Observatory-compliant and part of the “Open Universe”, a global initiative built under the auspices of the United Nations.

  6. Pregnancy and birth cohort resources in europe: a large opportunity for aetiological child health research.

    PubMed

    Larsen, Pernille Stemann; Kamper-Jørgensen, Mads; Adamson, Ashley; Barros, Henrique; Bonde, Jens Peter; Brescianini, Sonia; Brophy, Sinead; Casas, Maribel; Charles, Marie-Aline; Devereux, Graham; Eggesbø, Merete; Fantini, Maria Pia; Frey, Urs; Gehring, Ulrike; Grazuleviciene, Regina; Henriksen, Tine Brink; Hertz-Picciotto, Irva; Heude, Barbara; Hryhorczuk, Daniel O; Inskip, Hazel; Jaddoe, Vincent W V; Lawlor, Debbie A; Ludvigsson, Johnny; Kelleher, Cecily; Kiess, Wieland; Koletzko, Berthold; Kuehni, Claudia Elisabeth; Kull, Inger; Kyhl, Henriette Boye; Magnus, Per; Momas, Isabelle; Murray, Dierdre; Pekkanen, Juha; Polanska, Kinga; Porta, Daniela; Poulsen, Gry; Richiardi, Lorenzo; Roeleveld, Nel; Skovgaard, Anne Mette; Sram, Radim J; Strandberg-Larsen, Katrine; Thijs, Carel; Van Eijsden, Manon; Wright, John; Vrijheid, Martine; Andersen, Anne-Marie Nybo

    2013-07-01

    During the past 25 years, many pregnancy and birth cohorts have been established. Each cohort provides unique opportunities for examining associations of early-life exposures with child development and health. However, to fully exploit the large amount of available resources and to facilitate cross-cohort collaboration, it is necessary to have accessible information on each cohort and its individual characteristics. The aim of this work was to provide an overview of European pregnancy and birth cohorts registered in a freely accessible database located at http://www.birthcohorts.net. European pregnancy and birth cohorts initiated in 1980 or later with at least 300 mother-child pairs enrolled during pregnancy or at birth, and with postnatal data, were eligible for inclusion. Eligible cohorts were invited to provide information on the data and biological samples collected, as well as the timing of data collection. In total, 70 cohorts were identified. Of these, 56 fulfilled the inclusion criteria encompassing a total of more than 500,000 live-born European children. The cohorts represented 19 countries with the majority of cohorts located in Northern and Western Europe. Some cohorts were general with multiple aims, whilst others focused on specific health or exposure-related research questions. This work demonstrates a great potential for cross-cohort collaboration addressing important aspects of child health. The web site, http://www.birthcohorts.net, proved to be a useful tool for accessing information on European pregnancy and birth cohorts and their characteristics. © 2013 John Wiley & Sons Ltd.

  7. Interlibrary loan in primary access libraries: challenging the traditional view.

    PubMed

    Dudden, R F; Coldren, S; Condon, J E; Katsh, S; Reiter, C M; Roth, P L

    2000-10-01

    Primary access libraries serve as the foundation of the National Network of Libraries of Medicine (NN/LM) interlibrary loan (ILL) hierarchy, yet few published reports directly address the important role these libraries play in the ILL system. This may reflect the traditional view that small, primary access libraries are largely users of ILL, rather than important contributors to the effectiveness and efficiency of the national ILL system. This study was undertaken to test several commonly held beliefs regarding ILL system use by primary access libraries. Three hypotheses were developed. H1: Colorado and Wyoming primary access libraries comply with the recommended ILL guideline of adhering to a hierarchical structure, emphasizing local borrowing. H2: The closures of two Colorado Council of Medical Librarians (CCML) primary access libraries in 1996 resulted in twenty-three Colorado primary access libraries' borrowing more from their state resource library in 1997. H3: The number of subscriptions held by Colorado and Wyoming primary access libraries is positively correlated with the number of items they loan and negatively correlated with the number of items they borrow. The hypotheses were tested using the 1992 and 1997 DOCLINE and OCLC data of fifty-four health sciences libraries, including fifty primary access libraries, two state resource libraries, and two general academic libraries in Colorado and Wyoming. The ILL data were obtained electronically and analyzed using Microsoft Word 98, Microsoft Excel 98, and JMP 3.2.2. CCML primary access libraries comply with the recommended guideline to emphasize local borrowing by supplying each other with the majority of their ILLs, instead of overburdening libraries located at higher levels in the ILL hierarchy (H1). The closures of two CCML primary access libraries appear to have affected the entire ILL system, resulting in a greater volume of ILL activity for the state resource library and other DOCLINE libraries higher up in the ILL hierarchy and highlighting the contribution made by CCML primary access libraries (H2). CCML primary access libraries borrow and lend in amounts that are proportional to their collection size, rather than overtaxing libraries at higher levels in the ILL hierarchy with large numbers of requests (H3). The main limitations of this study were the small sample size and the use of data collected for another purpose, the CCML ILL survey. The findings suggest that there is little evidence to support several commonly held beliefs regarding ILL system use by primary access libraries. In addition to validating the important contributions made by primary access libraries to the national ILL system, baseline data that can be used to benchmark current practice performance are provided.

  8. Interlibrary loan in primary access libraries: challenging the traditional view

    PubMed Central

    Dudden, Rosalind Farnam; Coldren, Sue; Condon, Joyce Elizabeth; Katsh, Sara; Reiter, Catherine Morton; Roth, Pamela Lynn

    2000-01-01

    Introduction: Primary access libraries serve as the foundation of the National Network of Libraries of Medicine (NN/LM) interlibrary loan (ILL) hierarchy, yet few published reports directly address the important role these libraries play in the ILL system. This may reflect the traditional view that small, primary access libraries are largely users of ILL, rather than important contributors to the effectiveness and efficiency of the national ILL system. Objective: This study was undertaken to test several commonly held beliefs regarding ILL system use by primary access libraries. Hypotheses: Three hypotheses were developed. H1: Colorado and Wyoming primary access libraries comply with the recommended ILL guideline of adhering to a hierarchical structure, emphasizing local borrowing. H2: The closures of two Colorado Council of Medical Librarians (CCML) primary access libraries in 1996 resulted in twenty-three Colorado primary access libraries' borrowing more from their state resource library in 1997. H3: The number of subscriptions held by Colorado and Wyoming primary access libraries is positively correlated with the number of items they loan and negatively correlated with the number of items they borrow. Methods: The hypotheses were tested using the 1992 and 1997 DOCLINE and OCLC data of fifty-four health sciences libraries, including fifty primary access libraries, two state resource libraries, and two general academic libraries in Colorado and Wyoming. The ILL data were obtained electronically and analyzed using Microsoft Word 98, Microsoft Excel 98, and JMP 3.2.2. Results: CCML primary access libraries comply with the recommended guideline to emphasize local borrowing by supplying each other with the majority of their ILLs, instead of overburdening libraries located at higher levels in the ILL hierarchy (H1). The closures of two CCML primary access libraries appear to have affected the entire ILL system, resulting in a greater volume of ILL activity for the state resource library and other DOCLINE libraries higher up in the ILL hierarchy and highlighting the contribution made by CCML primary access libraries (H2). CCML primary access libraries borrow and lend in amounts that are proportional to their collection size, rather than overtaxing libraries at higher levels in the ILL hierarchy with large numbers of requests (H3). Limitations: The main limitations of this study were the small sample size and the use of data collected for another purpose, the CCML ILL survey. Conclusions: The findings suggest that there is little evidence to support several commonly held beliefs regarding ILL system use by primary access libraries. In addition to validating the important contributions made by primary access libraries to the national ILL system, baseline data that can be used to benchmark current practice performance are provided. PMID:11055297

  9. Effect of Ankle Position and Noninvasive Distraction on Arthroscopic Accessibility of the Distal Tibial Plafond.

    PubMed

    Akoh, Craig C; Dibbern, Kevin; Amendola, Annuziato; Sittapairoj, Tinnart; Anderson, Donald D; Phisitkul, Phinit

    2017-10-01

    Osteochondral lesions of the tibial plafond (OLTPs) can lead to chronic ankle pain and disability. It is not known how limited ankle motion or joint distraction affects arthroscopic accessibility of these lesions. The purpose of this study was to determine the effects of different fixed flexion angles and distraction on accessibility of the distal tibial articular surface during anterior and posterior arthroscopy. Fourteen below-knee cadaver specimens underwent anterior and posterior ankle arthroscopy using a 30-degree 2.7-mm arthroscopic camera. Intra-articular working space was measured with a precision of 1 mm using sizing rods. The accessible areas at the plafond were marked under direct visualization at varying fixed ankle flexion positions. Arthroscopic accessibilities were normalized as percent area using a surface laser scan. Statistical analyses were performed to assess the relationship between preoperative ankle range of motion, amount of distraction, arthroscopic approach, and arthroscopic plafond visualization. There was significantly greater accessibility during posterior arthroscopy (73.5%) compared with anterior arthroscopy (51.2%) in the neutral ankle position (P = .007). There was no difference in accessibility for anterior arthroscopy with increasing level of plantarflexion (P > .05). Increasing dorsiflexion during posterior arthroscopy significantly reduced ankle accessibility (P = .028). There was a significant increase in accessibility through the anterior and posterior approach with increasing amount of intra-articular working space (parameter estimates ± SE): anterior = 14.2 ± 3.34 (P < .01) and posterior = 10.6 ± 3.7 (P < .05). Frequency data showed that the posterior third of the plafond was completely inaccessible in 33% of ankles during anterior arthroscopy. The frequency of inaccessible anterior plafond during posterior arthroscopy was 12%. Intra-articular working space and arthroscopic accessibility were greater during posterior arthroscopy compared with anterior arthroscopy. Improved accessibility of OLTPs may be achieved from posterior arthroscopy. Arthroscopic accessibility was heavily dependent on the amount of intraoperative joint working space achieved and not on ankle position. OLTPs are often encountered in tandem with talar lesions, and safely achieving intra-articular working space through noninvasive distraction greatly improved arthroscopic accessibility.

  10. Solutions for Mining Distributed Scientific Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Pham, L.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.

    2007-12-01

    Researchers at the University of Alabama in Huntsville (UAH) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) are working on approaches and methodologies that facilitate the analysis of large amounts of distributed scientific data. Despite the existence of full-featured analysis tools, such as the Algorithm Development and Mining (ADaM) toolkit from UAH, and data repositories, such as the GES DISC, that provide online access to large amounts of data, there remain obstacles to getting the analysis tools and the data together in a workable environment. Does one bring the data to the tools or deploy the tools close to the data? The large size of many current Earth science datasets incurs significant overhead in network transfer for analysis workflows, even with the advanced networking capabilities that are available between many educational and government facilities. The UAH and GES DISC teams are developing a capability to define analysis workflows using distributed services and online data resources. We are developing two solutions for this problem that address different analysis scenarios. The first is a Data Center Deployment of the analysis services for large data selections, orchestrated by a remotely defined analysis workflow. The second is a Data Mining Center approach of providing a cohesive analysis solution for smaller subsets of data. The two approaches can be complementary and thus provide flexibility for researchers to exploit the best solution for their data requirements. The Data Center Deployment of the analysis services has been implemented by deploying ADaM web services at the GES DISC so they can access the data directly, without the need of network transfers. Using the Mining Workflow Composer, a user can define an analysis workflow that is then submitted through a Web Services interface to the GES DISC for execution by a processing engine. The workflow definition is composed, maintained, and executed at a distributed location, but most of the actual services comprising the workflow are available local to the GES DISC data repository. Additional refinements will ultimately provide a package that is easily implemented and configured at additional data centers for analysis of additional science data sets. Enhancements to the ADaM toolkit allow the staging of distributed data wherever the services are deployed, to support a Data Mining Center that can provide additional computational resources, large storage of output, easier addition and updates of available services, and access to data from multiple repositories. The Data Mining Center case provides researchers more flexibility to quickly try different workflow configurations and refine the process, using smaller amounts of data that may likely be transferred from distributed online repositories. This environment is sufficient for some analyses, but can also be used as an initial sandbox to test and refine a solution before staging the execution at a Data Center Deployment. The detection of airborne dust over both water and land in MODIS imagery, using mining services from both solutions, will be presented. The dust detection is just one possible example of the mining and analysis capabilities the proposed mining services solutions will provide to the science community. More information about the available services and the current status of this project is available at http://www.itsc.uah.edu/mws/
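
    The "Data Center Deployment" scenario boils down to shipping a workflow description to an engine that runs next to the archive. A minimal Python sketch of that pattern follows; the operation names, parameters, and toy data are hypothetical stand-ins for the ADaM web services.

```python
# Invented operation registry standing in for deployed analysis services.
OPERATIONS = {
    "subset": lambda data, p: [x for x in data if p["lo"] <= x <= p["hi"]],
    "scale":  lambda data, p: [x * p["factor"] for x in data],
    "mean":   lambda data, p: [sum(data) / len(data)],
}

def run_workflow(workflow, data):
    # The engine runs beside the archive: each step transforms the data
    # at the data center, and only the final product travels to the client.
    for step in workflow:
        data = OPERATIONS[step["op"]](data, step.get("params", {}))
    return data

# A client-side workflow definition is just a serializable description
# that can be composed remotely and shipped to the processing engine.
wf = [{"op": "subset", "params": {"lo": 0, "hi": 10}},
      {"op": "scale",  "params": {"factor": 2.0}},
      {"op": "mean"}]
print(run_workflow(wf, [3, 7, 15, 9, -2]))
```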

  11. Storage hierarchies and multimedia file servers

    NASA Astrophysics Data System (ADS)

    Wullert, John R.; Von Lehman, Ann C.

    1994-11-01

    A variety of multimedia and video services have been proposed and investigated, including services such as video-on-demand, distance learning, home shopping, and telecommuting. These services tend to rely on high-data-rate communications and most have a corresponding need for a large amount of storage with high data rates and short access times. For some services, it has been predicted that the cost of storage will be significant compared to the cost of switching and transmission in a broadband network. This paper discusses architectures of a variety of multimedia and video services, with an emphasis on the relationship between the technological considerations of the storage hierarchy needed to support these services and the service architectures.

  12. The Matsu Wheel: A Cloud-Based Framework for Efficient Analysis and Reanalysis of Earth Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Patterson, Maria T.; Anderson, Nicholas; Bennett, Collin; Bruggemann, Jacob; Grossman, Robert L.; Handy, Matthew; Ly, Vuong; Mandl, Daniel J.; Pederson, Shane; Pivarski, James

    2016-01-01

    Project Matsu is a collaboration between the Open Commons Consortium and NASA focused on developing open source technology for cloud-based processing of Earth satellite imagery with practical applications to aid in natural disaster detection and relief. Project Matsu has developed an open source cloud-based infrastructure to process, analyze, and reanalyze large collections of hyperspectral satellite image data using OpenStack, Hadoop, MapReduce and related technologies. We describe a framework for efficient analysis of large amounts of data called the Matsu "Wheel." The Matsu Wheel is currently used to process incoming hyperspectral satellite data produced daily by NASA's Earth Observing-1 (EO-1) satellite. The framework allows batches of analytics, scanning for new data, to be applied to data as they flow in. In the Matsu Wheel, the data only need to be accessed and preprocessed once, regardless of the number or types of analytics, which can easily be slotted into the existing framework. The Matsu Wheel system provides a significantly more efficient use of computational resources over alternative methods when the data are large, have high-volume throughput, may require heavy preprocessing, and are typically used for many types of analysis. We also describe our preliminary Wheel analytics, including an anomaly detector for rare spectral signatures or thermal anomalies in hyperspectral data and a land cover classifier that can be used for water and flood detection. Each of these analytics can generate visual reports accessible via the web for the public and interested decision makers. The resulting products of the analytics are also made accessible through an Open Geospatial Consortium (OGC)-compliant Web Map Service (WMS) for further distribution. The Matsu Wheel allows many shared data services to be performed together to efficiently use resources for processing hyperspectral satellite image data and other large datasets, e.g., environmental datasets that may be analyzed for many purposes.
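
    The Wheel pattern can be reduced to a small loop: read and preprocess each incoming scene once, then fan the prepared data out to every registered analytic. The sketch below is a schematic Python rendering under that reading; the two analytic functions are placeholders, not Project Matsu's actual detectors.

```python
def preprocess(scene):
    # Done exactly once per scene, e.g., calibration or reprojection.
    return [band * 0.01 for band in scene]

def anomaly_detector(data):
    return max(data) > 2.5                               # placeholder anomaly test

def flood_classifier(data):
    return sum(1 for v in data if v < 0.2) / len(data)   # placeholder wetness fraction

ANALYTICS = {"anomaly": anomaly_detector, "flood": flood_classifier}

def wheel(incoming_scenes):
    for scene in incoming_scenes:
        prepared = preprocess(scene)          # single data access + preparation
        # Any number of analytics can be slotted in with no extra I/O.
        yield {name: fn(prepared) for name, fn in ANALYTICS.items()}

for report in wheel([[120, 340, 80], [10, 15, 12]]):
    print(report)
```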

  13. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
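
    A software analogue of this architecture is easy to picture: a 128 x 128 grid holds one bit per processing element, and a spatial translation moves all bits to neighboring elements in one lockstep step. The NumPy sketch below illustrates the idea; note that np.roll wraps at the edges, whereas the hardware would shift bits in from the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
# One bit slice: a 128 x 128 array, one bit per "processing element".
bit_slice = rng.integers(0, 2, size=(128, 128), dtype=np.uint8)

def shift(plane, dy=0, dx=0):
    # All 16,384 elements exchange bits with their neighbors at once.
    # np.roll wraps around; real hardware would shift zeros in instead.
    return np.roll(plane, shift=(dy, dx), axis=(0, 1))

# One lockstep step: combine each element's bit with the bit arriving
# from the element above it (a typical neighborhood image operation).
neighbor = shift(bit_slice, dy=1)
result = bit_slice ^ neighbor            # XOR evaluated in parallel everywhere
print(result.shape, int(result.sum()))
```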

  14. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run 1 and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run 2 is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run 1, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to meet the needs of the CMS experiment and local users. Moreover, direct access to and integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
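
    The cloud-bursting logic itself is a simple control loop: compare the batch backlog against the static farm's capacity and grow or shrink the set of cloud workers accordingly. The Python sketch below shows only this bookkeeping; the provisioning and LSF-registration calls are hypothetical placeholders, not the actual OpenStack or LSF APIs.

```python
def pending_jobs():
    # Placeholder: a real deployment would query the LSF queue depth.
    return 860

def boot_cloud_worker():
    # Placeholder for booting an OpenStack instance that self-registers
    # with LSF as a dynamic worker node.
    print("booting cloud worker and registering it with LSF")

def release_cloud_worker():
    print("deregistering cloud worker and deleting the instance")

def elastic_step(static_slots=800, jobs_per_worker=8, active_cloud_workers=0):
    backlog = max(0, pending_jobs() - static_slots)
    wanted = -(-backlog // jobs_per_worker)        # ceil(backlog / jobs_per_worker)
    for _ in range(max(0, wanted - active_cloud_workers)):
        boot_cloud_worker()
    for _ in range(max(0, active_cloud_workers - wanted)):
        release_cloud_worker()
    return wanted

print("cloud workers wanted:", elastic_step())
```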

  15. 32 CFR 3.8 - DoD access to records policy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for defined payable milestones, with no provision for financial or cost reporting that would be a... necessary to verify statutory cost share or to verify amounts generated from financial or cost records that... General access. (1) Fixed-price type OT agreements. (i) General—DoD access to records is not generally...

  16. 32 CFR 3.8 - DoD access to records policy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for defined payable milestones, with no provision for financial or cost reporting that would be a... necessary to verify statutory cost share or to verify amounts generated from financial or cost records that... General access. (1) Fixed-price type OT agreements. (i) General—DoD access to records is not generally...

  17. A new database sub-system for grain-size analysis

    NASA Astrophysics Data System (ADS)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software; MS Access databases would also face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It expands the LabData database and laboratory management system published by Suckow and Dumke (2001). Compatibility with this very flexible database system makes it easy to import the grain-size data, and provides the overall infrastructure for storing geographic context as well as the ability to organize content, for example by combining several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post-processing, MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain-size spectra can be exported and checked as number, surface, and volume fractions, while single spectra can be locked for further post-processing. From the spectra, the usual statistical values (e.g., mean and median) can be computed, as well as fractions larger than a given grain size, smaller than a given grain size, fractions between any two grain sizes, or any ratio of such values. These deduced values can be easily exported into Excel for one or more depth profiles. Such reprocessing of large amounts of data also opens new display possibilities: normally, depth profiles of grain-size data are displayed only with summarized parameters such as the clay content, sand content, etc., which always shows only part of the available information at each depth, or, alternatively, full spectra are displayed at a single depth. The new software now allows the whole grain-size spectrum at each depth to be displayed in a three-dimensional view. LabData and the grain-size subsystem are based on MS Access as front-end and MS SQL Server as back-end database systems. The SQL code for the data model, the SQL Server procedures and triggers, and the MS Access Basic code for the front end are public domain code, published under the GNU GPL license agreement, and are available free of charge. References: Novothny, Á., Frechen, M., Horváth, E., Wacha, L., Rolf, C., 2011. Investigating the penultimate and last glacial cycles of the Süttő loess section (Hungary) using luminescence dating, high-resolution grain size, and magnetic susceptibility data. Quaternary International 234, 75-85. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
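
    The deduced quantities mentioned above follow directly from the measured spectra. As a worked illustration, the Python sketch below computes the weighted mean, the median (the size at which the cumulative distribution crosses 50%), and the fraction between two grain sizes, using a fabricated four-channel spectrum in place of the instrument's 116 channels.

```python
import numpy as np

sizes_um = np.array([2.0, 20.0, 63.0, 200.0])     # channel grain sizes (micrometres)
volume_pct = np.array([15.0, 40.0, 30.0, 15.0])   # measured volume percent per channel

weights = volume_pct / volume_pct.sum()
mean_size = float((weights * sizes_um).sum())     # volume-weighted mean grain size

# Median: size at which the cumulative volume distribution crosses 50%.
cum = np.cumsum(weights)
median_size = float(np.interp(0.5, cum, sizes_um))

def fraction_between(lo, hi):
    # Share of total volume with lo <= grain size < hi.
    mask = (sizes_um >= lo) & (sizes_um < hi)
    return float(weights[mask].sum())

print(mean_size, median_size, fraction_between(2.0, 63.0))  # e.g., a silt fraction
```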

  18. Migration from new-accession countries and duration expectancy in the EU-15: 2002–2008

    PubMed Central

    DeWaard, Jack; Ha, Jasmine Trang; Raymer, James; Wiśniowski, Arkadiusz

    2016-01-01

    European Union (EU) enlargements in 2004 and 2007 were accompanied by increased migration from new-accession to established-member (EU-15) countries. The impacts of these flows depend, in part, on the amount of time that persons from the former countries live in the latter over the life course. In this paper, we develop period estimates of duration expectancy in EU-15 countries among persons from new-accession countries. Using a newly developed set of harmonised Bayesian estimates of migration flows each year from 2002 to 2008 from the Integrated Modelling of European Migration (IMEM) Project, we exploit period age patterns of country-to-country migration and mortality to summarize the average number of years that persons from new-accession countries could be expected to live in EU-15 countries over the life course. In general, the results show that the amount of time that persons from new-accession countries could be expected to live in the EU-15 nearly doubled after 2004. PMID:28286353
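
    The underlying arithmetic of a duration expectancy is life-table-like: sum, over ages, the probability of surviving to each age multiplied by the probability of residing in the destination at that age. The Python fragment below illustrates this with invented numbers over a five-year age window; the IMEM-based estimates of course use full age schedules of migration and mortality.

```python
ages = range(20, 25)                       # a short age window for the demo
survival = {20: 1.00, 21: 0.999, 22: 0.998, 23: 0.997, 24: 0.996}
resident_prob = {20: 0.02, 21: 0.03, 22: 0.05, 23: 0.05, 24: 0.04}

# Expected years lived in EU-15 countries over this window.
duration_expectancy = sum(survival[a] * resident_prob[a] for a in ages)
print(f"expected years lived in the EU-15 between ages 20-24: {duration_expectancy:.3f}")
```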

  19. The textual characteristics of traditional and Open Access scientific journals are similar.

    PubMed

    Verspoor, Karin; Cohen, K Bretonnel; Hunter, Lawrence

    2009-06-15

    Recent years have seen an increased amount of natural language processing (NLP) work on full text biomedical journal publications. Much of this work is done with Open Access journal articles. Such work assumes that Open Access articles are representative of biomedical publications in general and that methods developed for analysis of Open Access full text publications will generalize to the biomedical literature as a whole. If this assumption is wrong, the cost to the community will be large, including not just wasted resources, but also flawed science. This paper examines that assumption. We collected two sets of documents, one consisting only of Open Access publications and the other consisting only of traditional journal publications. We examined them for differences in surface linguistic structures that have obvious consequences for the ease or difficulty of natural language processing and for differences in semantic content as reflected in lexical items. Regarding surface linguistic structures, we examined the incidence of conjunctions, negation, passives, and pronominal anaphora, and found that the two collections did not differ. We also examined the distribution of sentence lengths and found that both collections were characterized by the same mode. Regarding lexical items, we found that the Kullback-Leibler divergence between the two collections was low, and was lower than the divergence between either collection and a reference corpus. Where small differences did exist, log likelihood analysis showed that they were primarily in the area of formatting and in specific named entities. We did not find structural or semantic differences between the Open Access and traditional journal collections.

  20. The textual characteristics of traditional and Open Access scientific journals are similar

    PubMed Central

    Verspoor, Karin; Cohen, K Bretonnel; Hunter, Lawrence

    2009-01-01

    Background: Recent years have seen an increased amount of natural language processing (NLP) work on full text biomedical journal publications. Much of this work is done with Open Access journal articles. Such work assumes that Open Access articles are representative of biomedical publications in general and that methods developed for analysis of Open Access full text publications will generalize to the biomedical literature as a whole. If this assumption is wrong, the cost to the community will be large, including not just wasted resources, but also flawed science. This paper examines that assumption. Results: We collected two sets of documents, one consisting only of Open Access publications and the other consisting only of traditional journal publications. We examined them for differences in surface linguistic structures that have obvious consequences for the ease or difficulty of natural language processing and for differences in semantic content as reflected in lexical items. Regarding surface linguistic structures, we examined the incidence of conjunctions, negation, passives, and pronominal anaphora, and found that the two collections did not differ. We also examined the distribution of sentence lengths and found that both collections were characterized by the same mode. Regarding lexical items, we found that the Kullback-Leibler divergence between the two collections was low, and was lower than the divergence between either collection and a reference corpus. Where small differences did exist, log likelihood analysis showed that they were primarily in the area of formatting and in specific named entities. Conclusion: We did not find structural or semantic differences between the Open Access and traditional journal collections. PMID:19527520
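
    The lexical comparison reported above rests on the Kullback-Leibler divergence between the word distributions of the two collections. A small Python illustration follows; the add-one smoothing and the two toy "collections" are assumptions for the demo, not the paper's exact estimation procedure.

```python
import math
from collections import Counter

def kl_divergence(tokens_p, tokens_q):
    # KL(P||Q) over a shared vocabulary, with add-one smoothing so words
    # absent from one collection do not produce infinities.
    vocab = set(tokens_p) | set(tokens_q)
    cp, cq = Counter(tokens_p), Counter(tokens_q)
    total_p = len(tokens_p) + len(vocab)
    total_q = len(tokens_q) + len(vocab)
    kl = 0.0
    for w in vocab:
        p = (cp[w] + 1) / total_p          # smoothed probability in P
        q = (cq[w] + 1) / total_q          # smoothed probability in Q
        kl += p * math.log(p / q)
    return kl

open_access = "protein binding assay results protein expression".split()
traditional = "protein binding measurement results gene expression".split()
print(f"KL(P||Q) = {kl_divergence(open_access, traditional):.4f}")
```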

  1. Interactive Scripting for Analysis and Visualization of Arbitrarily Large, Disparately Located Climate Data Ensembles Using a Progressive Runtime Server

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.

    2017-12-01

    Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, which in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively, thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both Docker and Anaconda.
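
    The key behavioral contract of such a runtime is that every partial result is usable: computation proceeds coarse-to-fine and can be interrupted at any point. The generator-based Python sketch below illustrates that contract on a local array; it is an analogy under stated assumptions, while the actual system resolves remote multiresolution access behind the same idea.

```python
import numpy as np

def progressive_mean(field, levels=4):
    # Coarsest level first: subsample aggressively, then refine.
    for level in reversed(range(levels)):
        stride = 2 ** level
        sample = field[::stride, ::stride]
        # Each yield is a usable (if approximate) answer.
        yield stride, float(sample.mean())

data = np.random.default_rng(1).random((512, 512))
for stride, estimate in progressive_mean(data):
    print(f"stride {stride:2d}: mean estimate {estimate:.4f}")
    # A real client could break here on user interaction or timeout.
```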

  2. How Much Water Trees Access and How It Determines Forest Response to Drought

    NASA Astrophysics Data System (ADS)

    Berdanier, A. B.; Clark, J. S.

    2015-12-01

    Forests are transformed by drought as water availability drops below levels where trees of different sizes and species can maintain productivity and survive. Physiological studies have provided detailed understanding of how species differences affect drought vulnerability but they offer almost no insights about the amount of water different trees can access beyond general statements about rooting depth. While canopy architecture provides strong evidence for light availability aboveground, belowground moisture availability remains essentially unknown. For example, do larger trees always have greater access to soil moisture? In temperate mixed forests, the ability to access a large soil moisture pool could minimize damage during drought events and facilitate post-drought recovery, potentially at the expense of neighboring trees. We show that the pool of accessible soil moisture can be estimated for trees with data on whole-plant transpiration and that this data can be used to predict water availability for forest stands. We estimate soil water availability with a Bayesian state-space model based on a simple water balance, where cumulative depressions in water use below potential transpiration indicate soil resource depletion. We compare trees of different sizes and species, extend these findings to the entire stand, and connect them to our recent research showing that tree survival after drought depends on post-drought growth recovery and local moisture availability. These results can be used to predict competitive abilities for soil water, understand ecohydrological variation within stands, and identify trees that are at risk of damage from future drought events.
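
    The bookkeeping behind this inference can be shown with a deterministic toy: storage is recharged by rain and drawn down by transpiration, and actual transpiration falls below potential transpiration once the accessible pool is depleted; the cumulative shortfall is the signal the authors' Bayesian state-space model fits. The numbers in the Python sketch below are invented, and the real model adds observation error and priors.

```python
def simulate(storage0, rain, potential_t):
    # Simple water-balance bookkeeping: transpiration is capped by the
    # accessible pool, and the cumulative depression below potential
    # transpiration indicates soil resource depletion.
    storage, shortfall = storage0, 0.0
    for r, pt in zip(rain, potential_t):
        actual_t = min(pt, storage)       # cannot transpire more than is accessible
        shortfall += pt - actual_t        # depression below potential transpiration
        storage = storage - actual_t + r
    return storage, shortfall

storage, shortfall = simulate(storage0=30.0,
                              rain=[0, 0, 5, 0, 0, 0],
                              potential_t=[6, 6, 6, 6, 6, 6])
print(f"remaining storage: {storage:.1f} mm, cumulative shortfall: {shortfall:.1f} mm")
```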

  3. Relative fluid novelty differentially alters the time course of limited-access ethanol and water intake in selectively bred high-alcohol-preferring mice.

    PubMed

    Linsenbardt, David N; Boehm, Stephen L

    2015-04-01

    The influence of previous alcohol (ethanol [EtOH])-drinking experience on increasing the rate and amount of future EtOH consumption might be a genetically regulated phenomenon critical to the development and maintenance of repeated excessive EtOH abuse. We have recently found evidence supporting this view, wherein inbred C57BL/6J (B6) mice develop progressive increases in the rate of binge EtOH consumption over repeated drinking-in-the-dark (DID) EtOH access sessions (i.e., "front loading"). The primary goal of this study was to evaluate identical parameters in high-alcohol-preferring (HAP) mice to determine whether similar temporal alterations in limited-access EtOH drinking develop in a population selected for high EtOH preference/intake under continuous (24-hour) access conditions. Using specialized volumetric drinking devices, HAP mice received 14 daily 2-hour DID EtOH or water access sessions. A subset of these mice was then given 1 day access to the opposite assigned fluid on day 15. Home cage locomotor activity was recorded concomitantly on each day of these studies. The possibility of behavioral/metabolic tolerance was evaluated on day 16 using experimenter-administered EtOH. The amount of EtOH consumed within the first 15 minutes of access increased markedly over days. However, in contrast to previous observations in B6 mice, EtOH front loading was also observed on day 15 in mice that only had previous DID experience with water. Furthermore, a decrease in the amount of water consumed within the first 15 minutes of access compared to animals given repeated water access was observed on day 15 in mice with 14 previous days of EtOH access. These data further illustrate the complexity and importance of the temporal aspects of limited-access EtOH consumption and suggest that previous procedural/fluid experience in HAP mice selectively alters the time course of EtOH and water consumption. Copyright © 2015 by the Research Society on Alcoholism.

  4. CDPP Tools in the IMPEx infrastructure

    NASA Astrophysics Data System (ADS)

    Gangloff, Michel; Génot, Vincent; Bourrel, Nataliya; Hess, Sébastien; Khodachenko, Maxim; Modolo, Ronan; Kallio, Esa; Alexeev, Igor; Al-Ubaidi, Tarek; Cecconi, Baptiste; André, Nicolas; Budnik, Elena; Bouchemit, Myriam; Dufourg, Nicolas; Beigbeder, Laurent

    2014-05-01

    The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of plasma data products from space missions and ground observatories. Besides these activities, the CDPP has developed services like AMDA (http://amda.cdpp.eu/), which enables in-depth analysis of large amounts of data through dedicated functionalities such as visualization, conditional search and cataloguing, and 3DView (http://3dview.cdpp.eu/), which provides immersive visualisations of planetary environments and is being further developed to include simulation and observational data. Both tools implement the IMPEx protocol (http://impexfp7.oeaw.ac.at/) to give access to outputs of simulation runs and models in planetary sciences from several providers, such as LATMOS, FMI and SINP; prototypes have also been built to access some UCLA and CCMC simulations. These tools and their interaction will be presented together with the IMPEx simulation data model (http://impex.latmos.ipsl.fr/tools/DataModel.htm) used for the interface to model databases.

  5. HEADSS up: Adolescents and the Internet

    PubMed Central

    Norris, Mark L

    2007-01-01

    INTRODUCTION The Internet contains a tremendous amount of unregulated information. Internet use gives adolescents access to a wide variety of information and communication devices, and may be associated with certain risks. OBJECTIVE To provide health care professionals with information on adolescent Internet use and its associated risks. DISCUSSION Ninety-four per cent of Canadian youth surveyed nationwide in 2005 reported having Internet access in their homes. Parents and health care providers need to educate themselves on issues of Internet safety. The divergent means by which adolescents are using the Internet and the inherent risks associated with unsupervised and uneducated use are addressed. Parents and teenagers are provided with tips for safe Internet use, and health care providers are offered sample questions pertaining to adolescent Internet use. SUMMARY A large proportion of adolescents use the Internet daily. Studies examining the risks of online exposure in this age group are evolving. Awareness of the range of applications and information available online will facilitate counselling on appropriate Internet use. PMID:19030361

  6. An open source, web based, simple solution for seismic data dissemination and collaborative research

    NASA Astrophysics Data System (ADS)

    Diviacco, Paolo

    2005-06-01

    Collaborative research and data dissemination in the field of geophysical exploration need network tools that can access large amounts of data from anywhere, using any PC or workstation. Simple solutions based on a combination of Open Source software can be developed to address such requests, exploiting the possibilities offered by web technologies and at the same time avoiding the costs and inflexibility of commercial systems. A viable solution consists of MySQL for data storage and retrieval, CWP/SU and GMT for data visualisation, and a scripting layer driven by PHP that allows users to access the system via an Apache web server. In light of the experience of building the on-line archive of seismic data of the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), we describe the solutions and methods adopted, with a view to stimulating both collaborative network research at institutions similar to ours and the development of different applications.

  7. Novel, Web-based, information-exploration approach for improving operating room logistics and system processes.

    PubMed

    Nagy, Paul G; Konewko, Ramon; Warnock, Max; Bernstein, Wendy; Seagull, Jacob; Xiao, Yan; George, Ivan; Park, Adrian

    2008-03-01

    Routine clinical information systems now have the ability to gather large amounts of data that surgical managers can access to create a seamless and proactive approach to streamlining operations and minimizing delays. The challenge lies in aggregating and displaying these data in an easily accessible format that provides useful, timely information on current operations. A Web-based, graphical dashboard is described in this study, which can be used to interpret clinical operational data, allow managers to see trends in data, and help identify inefficiencies that were not apparent with more traditional, paper-based approaches. The dashboard provides a visual decision support tool that assists managers in pinpointing areas for continuous quality improvement. The limitations of paper-based techniques, the development of the automated display system, and key performance indicators in analyzing aggregate delays, time, specialties, and teamwork are reviewed. Strengths, weaknesses, opportunities, and threats associated with implementing such a program in the perioperative environment are summarized.

  8. Recombinant protein expression for structural biology in HEK 293F suspension cells: a novel and accessible approach.

    PubMed

    Portolano, Nicola; Watson, Peter J; Fairall, Louise; Millard, Christopher J; Milano, Charles P; Song, Yun; Cowley, Shaun M; Schwabe, John W R

    2014-10-16

    The expression and purification of large amounts of recombinant protein complexes is an essential requirement for structural biology studies. For over two decades, prokaryotic expression systems such as E. coli have dominated the scientific literature over costly and less efficient eukaryotic cell lines. Despite the clear advantage in terms of yields and costs of expressing recombinant proteins in bacteria, the absence of specific co-factors, chaperones and post-translational modifications may cause loss of function, mis-folding, and disruption of protein-protein interactions of certain eukaryotic multi-subunit complexes, surface receptors and secreted proteins. The use of mammalian cell expression systems can address these drawbacks since they provide a eukaryotic expression environment. However, low protein yields and the high costs of such methods have until recently limited their use for structural biology. Here we describe a simple and accessible method for expressing and purifying milligram quantities of protein by performing transient transfections of suspension-grown HEK (Human Embryonic Kidney) 293F cells.

  9. Human intervention study to investigate the intestinal accessibility and bioavailability of anthocyanins from bilberries.

    PubMed

    Mueller, Dolores; Jung, Kathrin; Winter, Manuel; Rogoll, Dorothee; Melcher, Ralph; Richling, Elke

    2017-09-15

    We investigated the importance of the large intestine for the bioavailability of anthocyanins from bilberries in humans with and without a colon. Low bioavailability of anthocyanins in plasma and urine was observed in this study. Anthocyanins reached the circulation mainly as glucuronides. Analysis of ileal effluents (at the end of the small intestine) demonstrated that 30% of the ingested anthocyanins were stable during the 8-h passage through the upper intestine. Only 20% degradants were formed, and mostly intact anthocyanins were absorbed from the small intestine. Higher amounts of degradants than anthocyanins reached the circulation after bilberry extract consumption in both groups of subjects. Comparison of the bioavailability of anthocyanins in healthy subjects versus ileostomists revealed substantially higher amounts of anthocyanins and degradants in the plasma/urine of subjects with an intact gut. The results suggest that the colon is a significant site of absorption for bioactive components such as anthocyanins and their degradation products. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Data Prospecting Framework - a new approach to explore "big data" in Earth Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Rushing, J.; Lin, A.; Kuo, K.

    2012-12-01

    Due to advances in sensors, computation and storage, the cost and effort required to produce large datasets have been significantly reduced. As a result, we are seeing a proliferation of large-scale data sets being assembled in almost every science field, especially in geosciences. Opportunities to exploit the "big data" are enormous, as new hypotheses can be generated by combining and analyzing large amounts of data. However, such a data-driven approach to science discovery assumes that scientists can find and isolate relevant subsets from vast amounts of available data. Current Earth Science data systems only provide data discovery through simple metadata and keyword-based searches and are not designed to support data exploration capabilities based on the actual content. Consequently, scientists often find themselves downloading large volumes of data, struggling with large amounts of storage and learning new analysis technologies that will help them separate the wheat from the chaff. New mechanisms of data exploration are needed to help scientists discover the relevant subsets. We present data prospecting, a new content-based data analysis paradigm to support data-intensive science. Data prospecting allows researchers to explore big data in determining and isolating data subsets for further analysis. This is akin to geo-prospecting, in which mineral sites of interest are determined over the landscape through screening methods. The resulting "data prospects" only provide an interaction with and feel for the data through first-look analytics; the researchers would still have to download the relevant datasets and analyze them deeply using their favorite analytical tools to determine if the datasets will yield new hypotheses. Data prospecting combines two traditional categories of data analysis, data exploration and data mining, within the discovery step. Data exploration utilizes manual/interactive methods for data analysis such as standard statistical analysis and visualization, usually on small datasets. On the other hand, data mining utilizes automated algorithms to extract useful information. Humans guide these automated algorithms and specify algorithm parameters (training samples, clustering size, etc.). Data prospecting combines these two approaches using high-performance computing and new techniques for efficient distributed file access.
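
    The "first-look analytics" idea lends itself to a short sketch. Below is a minimal, hypothetical example using Dask (the dataset shape, screening statistic, and threshold are invented): cheap per-chunk statistics flag which subsets might be worth downloading for deep analysis.

    ```python
    # Stand-in for a large gridded dataset (time, lat, lon); in practice this
    # would be backed by distributed files rather than random numbers.
    import numpy as np
    import dask.array as da

    field = da.random.random((365, 720, 1440), chunks=(30, 720, 1440))

    # One cheap screening statistic per day: fraction of grid cells above a
    # threshold of interest. Unusually high values mark "prospects".
    hot_fraction = (field > 0.99).mean(axis=(1, 2)).compute()
    cutoff = hot_fraction.mean() + 2 * hot_fraction.std()
    prospects = np.where(hot_fraction > cutoff)[0]
    print("days flagged for deeper analysis:", prospects)
    ```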

  11. What Can OpenEI Do For You?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-12-10

    Open Energy Information (OpenEI) is an open source web platform—similar to the one used by Wikipedia—developed by the US Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to make the large amounts of energy-related data and information more easily searched, accessed, and used both by people and automated machine processes. Built utilizing the standards and practices of the Linked Open Data community, the OpenEI platform is much more robust and powerful than typical web sites and databases. As an open platform, all users can search, edit, add, and access data in OpenEI for free. The user community contributes the content and ensures its accuracy and relevance; as the community expands, so does the content's comprehensiveness and quality. The data are structured and tagged with descriptors to enable cross-linking among related data sets, advanced search functionality, and consistent, usable formatting. Data input protocols and quality standards help ensure the content is structured and described properly and derived from a credible source. Although DOE/NREL is developing OpenEI and seeding it with initial data, it is designed to become a true community model with millions of users, a large core of active contributors, and numerous sponsors.

  12. Handwashing in 51 Countries: Analysis of Proxy Measures of Handwashing Behavior in Multiple Indicator Cluster Surveys and Demographic and Health Surveys, 2010-2013.

    PubMed

    Kumar, Swapna; Loughnan, Libbet; Luyendijk, Rolf; Hernandez, Orlando; Weinger, Merri; Arnold, Fred; Ram, Pavani K

    2017-08-01

    In 2009, a common set of questions addressing handwashing behavior was introduced into nationally representative Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS), providing large amounts of comparable data from numerous countries worldwide. The objective of this analysis is to describe global handwashing patterns using two proxy indicators for handwashing behavior from 51 DHS and MICS surveys conducted in 2010-2013: availability of soap anywhere in the dwelling and access to a handwashing place with soap and water. Data were also examined across geographic regions, wealth quintiles, and rural versus urban settings. We found large disparities for both indicators across regions, and even among countries within the same World Health Organization region. Within countries, households in lower wealth quintiles and in rural areas were less likely to have soap anywhere in the dwelling and at designated handwashing locations than households in higher wealth quintiles and urban areas. In addition, disparities existed among various geographic regions within countries. This analysis demonstrates the need to promote access to handwashing materials and placement at handwashing locations in the dwelling, particularly in poorer, rural areas where children are more vulnerable to handwashing-preventable syndromes such as pneumonia and diarrhea.

  13. Preserving and vouchering butterflies and moths for large-scale museum-based molecular research

    PubMed Central

    Epstein, Samantha W.; Mitter, Kim; Hamilton, Chris A.; Plotkin, David; Mitter, Charles

    2016-01-01

    Butterflies and moths (Lepidoptera) comprise significant portions of the world’s natural history collections, but a standardized tissue preservation protocol for molecular research is largely lacking. Lepidoptera have traditionally been spread on mounting boards to display wing patterns and colors, which are often important for species identification. Many molecular phylogenetic studies have used legs from pinned specimens as the primary source for DNA in order to preserve a morphological voucher, but the amount of available tissue is often limited. Preserving an entire specimen in a cryogenic freezer is ideal for DNA preservation, but without an easily accessible voucher it can make specimen identification, verification, and morphological work difficult. Here we present a procedure that creates accessible and easily visualized “wing vouchers” of individual Lepidoptera specimens, and preserves the remainder of the insect in a cryogenic freezer for molecular research. Wings are preserved in protective holders so that both dorsal and ventral patterns and colors can be easily viewed without further damage. Our wing vouchering system has been implemented at the University of Maryland (AToL Lep Collection) and the University of Florida (Florida Museum of Natural History, McGuire Center for Lepidoptera and Biodiversity), which house two of the largest Lepidoptera molecular collections in the world. PMID:27366654

  14. Handwashing in 51 Countries: Analysis of Proxy Measures of Handwashing Behavior in Multiple Indicator Cluster Surveys and Demographic and Health Surveys, 2010–2013

    PubMed Central

    Kumar, Swapna; Loughnan, Libbet; Luyendijk, Rolf; Hernandez, Orlando; Weinger, Merri; Arnold, Fred; Ram, Pavani K.

    2017-01-01

    In 2009, a common set of questions addressing handwashing behavior was introduced into nationally representative Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS), providing large amounts of comparable data from numerous countries worldwide. The objective of this analysis is to describe global handwashing patterns using two proxy indicators for handwashing behavior from 51 DHS and MICS surveys conducted in 2010–2013: availability of soap anywhere in the dwelling and access to a handwashing place with soap and water. Data were also examined across geographic regions, wealth quintiles, and rural versus urban settings. We found large disparities for both indicators across regions, and even among countries within the same World Health Organization region. Within countries, households in lower wealth quintiles and in rural areas were less likely to have soap anywhere in the dwelling and at designated handwashing locations than households in higher wealth quintiles and urban areas. In addition, disparities existed among various geographic regions within countries. This analysis demonstrates the need to promote access to handwashing materials and placement at handwashing locations in the dwelling, particularly in poorer, rural areas where children are more vulnerable to handwashing-preventable syndromes such as pneumonia and diarrhea. PMID:28722572

  15. What Can OpenEI Do For You?

    ScienceCinema

    None

    2018-02-06

    Open Energy Information (OpenEI) is an open source web platform—similar to the one used by Wikipedia—developed by the US Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to make the large amounts of energy-related data and information more easily searched, accessed, and used both by people and automated machine processes. Built utilizing the standards and practices of the Linked Open Data community, the OpenEI platform is much more robust and powerful than typical web sites and databases. As an open platform, all users can search, edit, add, and access data in OpenEI for free. The user community contributes the content and ensures its accuracy and relevance; as the community expands, so does the content's comprehensiveness and quality. The data are structured and tagged with descriptors to enable cross-linking among related data sets, advanced search functionality, and consistent, usable formatting. Data input protocols and quality standards help ensure the content is structured and described properly and derived from a credible source. Although DOE/NREL is developing OpenEI and seeding it with initial data, it is designed to become a true community model with millions of users, a large core of active contributors, and numerous sponsors.

  16. Information Fusion of Conflicting Input Data.

    PubMed

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-10-29

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges for the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by evaluating it in a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.
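
    To make the conflict-handling idea concrete, here is a deliberately simplified sketch; it is not the μBalTLCS algorithm from the paper, only an illustration of the underlying notion of discounting sources that conflict with the group consensus before fusing:

    ```python
    import numpy as np

    def fuse(signals: np.ndarray) -> float:
        """signals: per-sensor condition scores in [0, 1]."""
        consensus = np.median(signals)           # robust group opinion
        conflict = np.abs(signals - consensus)   # per-sensor disagreement
        weights = 1.0 - conflict                 # discount conflicting sources
        weights /= weights.sum()
        return float(np.dot(weights, signals))

    # Three agreeing sensors and one conflicting outlier:
    print(fuse(np.array([0.82, 0.80, 0.85, 0.10])))
    # ~0.76, pulled far less by the outlier than a plain mean (~0.64) would be.
    ```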

  17. Information Fusion of Conflicting Input Data

    PubMed Central

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-01-01

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges for the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by evaluating it in a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874

  18. A tool for optimization of the production and user analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production mode, the amount of simulation, RAW data processing and end-user analysis computational tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data and methods of analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). The LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.

  19. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    NASA Astrophysics Data System (ADS)

    Koestner, Stefan

    2009-09-01

    With the increasing size and complexity of today's experiments in high energy physics, the amount of work required to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards outside the radiation area are accessed via embedded credit-card-sized PCs connected to a large local area network. The SPECS protocol is used to control the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
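
    A rough illustration of the finite-state-machine approach follows; the states, commands, and board names are invented for this sketch and are not LHCb's actual state graph. The point is that every device exposes the same small state machine, so a control layer can drive thousands of boards by broadcasting transitions:

    ```python
    from enum import Enum, auto

    class State(Enum):
        NOT_READY = auto()
        CONFIGURED = auto()
        RUNNING = auto()

    # Allowed transitions: (current state, command) -> next state
    TRANSITIONS = {
        (State.NOT_READY, "configure"): State.CONFIGURED,
        (State.CONFIGURED, "start"): State.RUNNING,
        (State.RUNNING, "stop"): State.CONFIGURED,
        (State.CONFIGURED, "reset"): State.NOT_READY,
    }

    class Board:
        def __init__(self, name: str):
            self.name, self.state = name, State.NOT_READY

        def handle(self, command: str) -> None:
            nxt = TRANSITIONS.get((self.state, command))
            if nxt is None:
                raise RuntimeError(f"{self.name}: '{command}' invalid in {self.state}")
            self.state = nxt

    boards = [Board(f"board_{i:03d}") for i in range(3)]
    for cmd in ("configure", "start"):      # broadcast, as a control layer would
        for b in boards:
            b.handle(cmd)
    print({b.name: b.state.name for b in boards})
    ```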

  20. RAID Disk Arrays for High Bandwidth Applications

    NASA Technical Reports Server (NTRS)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications are often 'real time', in which access to the storage device must take place on the schedule of the data source, not the storage system. A good example is a satellite downlink: the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology ideally suited to these types of applications is the redundant array of independent disks (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
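
    The redundancy behind RAID-3's tolerance of a drive failure is byte-level XOR parity on a dedicated drive. A minimal sketch, using three toy "drives" of four bytes each:

    ```python
    def xor_bytes(blocks):
        """XOR equal-length byte blocks together, byte by byte."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    stripe = [b"ABCD", b"EFGH", b"IJKL"]   # three data drives
    parity = xor_bytes(stripe)             # dedicated parity drive

    # Simulate losing drive 1 and rebuilding it from the survivors + parity:
    rebuilt = xor_bytes([stripe[0], stripe[2], parity])
    assert rebuilt == stripe[1]
    print("drive 1 rebuilt:", rebuilt)
    ```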

  1. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate these analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. We therefore mapped the time-consuming steps of GHOSTZ, a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In evaluation tests on metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.

  2. Sequencing of the large dsDNA genome of Oryctes rhinoceros nudivirus using multiple displacement amplification of nanogram amounts of virus DNA.

    PubMed

    Wang, Yongjie; Kleespies, Regina G; Ramle, Moslim B; Jehle, Johannes A

    2008-09-01

    The genomic sequence analysis of many large dsDNA viruses is hampered by the lack of sufficient sample material. Here, we report a whole-genome amplification of the Oryctes rhinoceros nudivirus (OrNV) isolate Ma07, starting from as little as about 10 ng of purified viral DNA, by applying the phi29 DNA polymerase- and exonuclease-resistant random-hexamer-based multiple displacement amplification (MDA) method. About 60 microg of high molecular weight DNA with fragment sizes of up to 25 kbp was amplified. A genomic DNA clone library was generated using the product DNA. After 8-fold sequencing coverage, the 127,615 bp OrNV genome was sequenced successfully. The results demonstrate that MDA-based whole-genome amplification enables rapid access to genomic information from exiguous virus samples.

  3. Quantum ensembles of quantum classifiers.

    PubMed

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing number of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which, similar to Bayesian learning, the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
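
    A classical toy analogue of that last construction may help; this is not the paper's quantum routine, only a simulation of its weighting rule: each classifier's weight is its training accuracy, and the collective decision is the sign of the weighted average, standing in for the expectation value read from the output qubit.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 2))
    y_train = np.sign(X_train[:, 0] + 0.3 * rng.normal(size=50))

    # An "ensemble" of 500 untrained random linear classifiers h_w(x) = sign(w . x).
    W = rng.normal(size=(500, 2))
    preds = np.sign(X_train @ W.T)                      # shape (50, 500)
    weights = (preds == y_train[:, None]).mean(axis=0)  # accuracy per classifier

    # Collective decision on a new point: sign of the weighted "expectation".
    x_new = np.array([1.2, -0.4])
    expectation = np.dot(weights, np.sign(W @ x_new)) / weights.sum()
    print("ensemble prediction:", np.sign(expectation))
    ```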

  4. Using Browser Notebooks to Analyse Big Atmospheric Data-sets in the Cloud

    NASA Astrophysics Data System (ADS)

    Robinson, N.; Tomlinson, J.; Arribas, A.; Prudden, R.

    2016-12-01

    We present an account of our experience building an ecosystem for the analysis of big atmospheric data-sets. Using modern technologies, we have developed a prototype platform which is scalable and capable of analysing very large atmospheric datasets. We tested different big-data ecosystems such as Hadoop MapReduce, Spark and Dask in order to find the one best suited for analysis of multidimensional binary data such as NetCDF. We make extensive use of infrastructure-as-code and containerisation to provide a platform which is reusable and which can scale to accommodate changes in demand. We make this platform readily accessible using browser-based notebooks. As a result, analysts with minimal technology experience can, in tens of lines of Python, make interactive data-visualisation web pages which can analyse very large amounts of data using cutting-edge big-data technology.
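
    As a flavour of the notebook workflow described, here is a minimal local stand-in: the cluster below runs on one machine and the array is random, where the platform itself would use containerised cloud workers and NetCDF-backed data.

    ```python
    from dask.distributed import Client, LocalCluster
    import dask.array as da

    cluster = LocalCluster(n_workers=4)  # the real platform scales this out
    client = Client(cluster)

    # Stand-in for a large atmospheric field with (time, lat, lon) axes.
    temps = da.random.random((1000, 721, 1440), chunks=(100, 721, 1440))
    zonal_mean = temps.mean(axis=(0, 2)).compute()  # average over time and longitude
    print(zonal_mean.shape)  # (721,): one value per latitude, ready to plot inline
    ```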

  5. ""Moby-Dick" Is My Favorite:" Evaluating a Cognitively Accessible Portable Reading System for Audiobooks for Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Davies, Daniel K.; Stock, Steven E.; King, Larry R.; Wehmeyer, Michael L.

    2008-01-01

    Significant barriers exist for individuals with intellectual disability to independently access print-based content. It is regrettable that, while the amount of content now available electronically increases, tools to access these materials have not been developed with individuals with intellectual disability in mind. This article reports the…

  6. Forming-free and self-rectifying resistive switching of the simple Pt/TaOx/n-Si structure for access device-free high-density memory application

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Zeng, Fei; Li, Fan; Wang, Minjuan; Mao, Haijun; Wang, Guangyue; Song, Cheng; Pan, Feng

    2015-03-01

    The search for self-rectifying resistive memories has aroused great attention due to their potential in high-density memory applications without additional access devices. Here we report the forming-free and self-rectifying bipolar resistive switching behavior of a simple Pt/TaOx/n-Si tri-layer structure. The forming-free phenomenon is attributed to the generation of a large amount of oxygen vacancies, in a TaOx region that is in close proximity to the TaOx/n-Si interface, via out-diffusion of oxygen ions from TaOx to n-Si. A maximum rectification ratio of ~6 × 10² is obtained when the Pt/TaOx/n-Si devices stay in a low resistance state, which originates from the existence of a Schottky barrier between the formed oxygen vacancy filament and the n-Si electrode. More importantly, numerical simulation reveals that the self-rectifying behavior itself can guarantee a maximum crossbar size of 212 × 212 (~44 kbit) on the premise of 10% read margin. Moreover, satisfactory switching uniformity and retention performance are observed based on this simple tri-layer structure. All of these results demonstrate the great potential of this simple Pt/TaOx/n-Si tri-layer structure for access device-free high-density memory applications.

  7. 12 CFR 615.5136 - Emergencies impeding normal access of Farm Credit banks to capital markets.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Investment Management § 615.5136 Emergencies impeding normal access of Farm Credit banks to capital markets..., adopt a resolution that: (a) Increases the amount of eligible investments that Farm Credit Banks, banks...

  8. 45 CFR 150.323 - Determining the amount of penalty-other matters as justice may require.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS... Determining the amount of penalty—other matters as justice may require. CMS may take into account other...

  9. Connecting long-tail scientists with big data centers using SaaS

    NASA Astrophysics Data System (ADS)

    Percivall, G. S.; Bermudez, L. E.

    2012-12-01

    Big data centers and long-tail scientists represent two extremes in the geoscience research community. Interoperability and inter-use based on software-as-a-service (SaaS) increase access to big data holdings by this underserved community of scientists. Large, institutional data centers have long been recognized as vital resources in the geoscience community. Permanent data archiving and dissemination centers provide "access to the data and (are) a critical source of people who have experience in the use of the data and can provide advice and counsel for new applications." [NRC] The "long tail of science" is the geoscience researchers that work separately from institutional data centers [Heidorn]. Long-tail scientists need to be efficient consumers of data from large, institutional data centers. Discussions in NSF EarthCube capture the challenges: "Like the vast majority of NSF-funded researchers, Alice (a long-tail scientist) works with limited resources. In the absence of suitable expertise and infrastructure, the apparently simple task that she assigns to her graduate student becomes an information discovery and management nightmare. Downloading and transforming datasets takes weeks." [Foster, et al.] The long-tail metaphor points to methods to bridge the gap, i.e., the Web. A decade ago, OGC began building a geospatial information space using open web standards for geoprocessing [ORM]. Recently, [Foster, et al.] accurately observed that "by adopting, adapting, and applying semantic web and SaaS technologies, we can make the use of geoscience data as easy and convenient as consumption of online media." SaaS places web services into cloud computing. SaaS for geospatial data is emerging rapidly, building on the first-generation geospatial web, e.g., the OGC Web Coverage Service [WCS] and the Data Access Protocol [DAP]. Several recent examples show progress in applying SaaS to geosciences: - NASA's Earth Data Coherent Web has a goal to improve the science user experience, using Web services (e.g. W*S, SOAP, RESTful) to reduce barriers to using EOSDIS data [ECW]. - NASA's LANCE provides direct access to vast amounts of satellite data using the OGC Web Map Tile Service (WMTS). - NOAA's Unified Access Framework for Gridded Data (UAF Grid) is a web-service-based capability for direct access to a variety of datasets using netCDF, OPeNDAP, THREDDS, WMS and WCS. [UAF] Tools to access such SaaS offerings are many and varied: some proprietary, others open source; some run in browsers, others are stand-alone applications. What is required is interoperability through the web interfaces offered by the data centers. NOAA's UAF service stack supports Matlab, ArcGIS, Ferret, GrADS, Google Earth, IDV, LAS. Any SaaS that offers OGC Web Services (WMS, WFS, WCS) can be accessed by scores of clients [OGC]. While there has been much progress in the recent year toward offering web services for the long tail of scientists, more needs to be done. Web services offer data access, but more than access is needed for inter-use of data, e.g. defining data schemas that allow for data fusion, addressing coordinate systems, spatial geometry, and semantics for observations. Connecting long-tail scientists with large data centers using SaaS and, in the future, the semantic web will address this large and currently underserved user community.
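
    As one concrete flavour of such standards-based access, the sketch below queries an OGC Web Map Service with OWSLib; the endpoint URL and layer name are placeholders, not a real data center's service.

    ```python
    from owslib.wms import WebMapService

    # Hypothetical endpoint; any WMS-compliant data center would work the same way.
    wms = WebMapService("https://data-center.example.org/wms", version="1.1.1")
    print(list(wms.contents))  # layers advertised by the server

    img = wms.getmap(
        layers=["sea_surface_temperature"],   # placeholder layer name
        srs="EPSG:4326",
        bbox=(-180, -90, 180, 90),
        size=(1024, 512),
        format="image/png",
    )
    with open("sst.png", "wb") as f:
        f.write(img.read())
    ```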

  10. Novel Sources of Witchweed (Striga) Resistance from Wild Sorghum Accessions.

    PubMed

    Mbuvi, Dorothy A; Masiga, Clet W; Kuria, Eric; Masanga, Joel; Wamalwa, Mark; Mohamed, Abdallah; Odeny, Damaris A; Hamza, Nada; Timko, Michael P; Runo, Steven

    2017-01-01

    Sorghum is a major food staple in sub-Saharan Africa (SSA), but its production is constrained by the parasitic plant Striga, which attaches to the roots of many cereal crops and causes severe stunting and loss of yield. Away from cultivated farmland, wild sorghum accessions grow as weedy plants and have shown remarkable immunity to Striga. We sought to determine the extent of the resistance to Striga in wild sorghum plants. Our screening strategy involved controlled laboratory assays in rhizotrons, where we artificially infected sorghum with Striga, as well as field experiments at three sites, where we grew sorghum under natural Striga infestation. We tested the resistance response of seven accessions of wild sorghum of the aethiopicum, drummondii, and arundinaceum races against N13, a cultivated Striga-resistant landrace. The susceptible control was the farmer-preferred variety Ochuti. From the laboratory experiments, we found three wild sorghum accessions (WSA-1, WSE-1, and WSA-2) that had significantly higher resistance than N13. These accessions had the lowest Striga biomass and the fewest and smallest Striga attached to them. Further microscopic and histological analysis of attached Striga haustoria showed that wild sorghum accessions hindered the ingression of the Striga haustorium into the host endodermis. In one of the resistant accessions (WSE-1), host and parasite interaction led to the accumulation of large amounts of secondary metabolites that formed a dark coloration at the interface. Field experiments confirmed the laboratory screening in that these same accessions were found to have resistance against Striga. In the field, wild sorghum had a low Area Under the Striga Number Progress Curve (AUSNPC), which measures emergence of Striga from a host over time. We concluded that wild sorghum accessions are an important reservoir of Striga resistance that could be used to expand the genetic basis of cultivated sorghum for resistance to the parasite.

  11. Tools and Data Services from the NASA Earth Satellite Observations for Remote Sensing Commercial Applications

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto

    2005-01-01

    Several commercial applications of remote sensing data, such as water resources management, environmental monitoring, climate prediction, agriculture, forestry, and preparation for and mitigation of extreme weather events, require access to vast amounts of archived high-quality data, software tools and services for data manipulation and information extraction. These, in turn, require a detailed understanding of the data's internal structure and physical implementation of data reduction, combination and data product generation. This time-consuming task must be undertaken before the core investigation can begin and is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets of different formats, structures, and resolutions.

  12. Evaluation of the Likelihood for Thermal Runaway for Nitrate Salt Containers in Storage at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heatwole, Eric Mann; Gunderson, Jake Alfred; Parker, Gary Robert

    2016-03-25

    In order to handle and process the existing Los Alamos National Laboratory (LANL) Nitrate Salt drums it is necessary to quantify the risk. One of the most obvious dangers is a repeat of the original violent reaction (2015), which would endanger nearby workers, not only with radioactive contamination, but also with large amounts of heat, dangerous corrosive gases and the physical dangers associated with a bursting drum. If there still existed a high probability of violent reaction, then these drums should only be accessed remotely. The objective of the work reported herein is to determine the likelihood of a similar violent event occurring.

  13. Security and Correctness Analysis on Privacy-Preserving k-Means Clustering Schemes

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    Due to the fast development of the Internet and related IT technologies, it has become increasingly easy to access large amounts of data. k-means clustering is a powerful and frequently used technique in data mining, and many research papers on privacy-preserving k-means clustering have been published. In this paper, we analyze the existing privacy-preserving k-means clustering schemes based on cryptographic techniques. We show that those schemes cause privacy breaches and cannot output correct results due to faults in the protocol construction. Furthermore, we analyze our own proposal as an option to mitigate these problems, though it still leaks intermediate information during the computation.
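
    For reference, this is the plain, non-private computation that the secure schemes must reproduce: a minimal Lloyd's k-means sketch. Privacy-preserving variants compute the same centroid updates without any party revealing its raw records.

    ```python
    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = np.argmin(dists, axis=1)
            # Recompute centroids, keeping the old one if a cluster empties.
            centroids = np.array([X[labels == j].mean(axis=0)
                                  if np.any(labels == j) else centroids[j]
                                  for j in range(k)])
        return centroids, labels

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    centroids, labels = kmeans(X, k=2)
    print(centroids)
    ```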

  14. Terabyte IDE RAID-5 Disk Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David A. Sanders et al.

    2003-09-30

    High energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that exploit recent developments in commodity hardware. We report on tests of redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high energy physics data analysis. IDE redundant array of inexpensive disks (RAID) prices now are less than the cost per terabyte of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important.

  15. Salicylaldehydes as privileged synthons in multicomponent reactions

    NASA Astrophysics Data System (ADS)

    Momahed Heravi, M.; Zadsirjan, V.; Mollaiye, M.; Heydari, M.; Taheri Kal Koshvandi, A.

    2018-06-01

    Salicylaldehyde (2-hydroxybenzaldehyde), bearing two different active functional groups, namely a hydroxy group and an aldehyde group, finds wide application as a key chemical in a variety of industrial processes, especially in the large-scale production of pharmaceuticals. Salicylaldehyde and most of its derivatives are commercially available or readily accessible, and hence are ideal starting materials for multicomponent reactions (MCRs), mostly pseudo-three- and four-component ones, giving rise to a plethora of heterocyclic systems. The importance of salicylaldehyde and the impressive number of studies concerning its applications in MCRs prompted us to highlight in this review the important role of this compound as a privileged synthon in the synthesis of heterocycles. The bibliography includes 276 references.

  16. Quality over Quantity: Contribution of Urban Green Space to Neighborhood Satisfaction

    PubMed Central

    Zhang, Yang; Van den Berg, Agnes E.; Van Dijk, Terry; Weitkamp, Gerd

    2017-01-01

    There is increasing evidence that the quality of green space significantly contributes to neighborhood satisfaction and well-being, independent of the mere amount of green space. In this paper, we examined residents’ perceptions of the quality and beneficial affordances of green space in relation to objectively assessed accessibility and usability. We used data from a survey in two neighborhoods (N = 223) of a medium-sized city in the Netherlands, which were similar in the amount of green space and other physical and socio-demographic characteristics, but differed in the availability of accessible and usable green spaces. Results show that residents of the neighborhood with a higher availability of accessible and usable green spaces were more satisfied with their neighborhood. This difference was statistically mediated by the higher level of perceived green space quality. Neighborhood satisfaction was significantly positively related to well-being. However, residents of the two neighborhoods did not differ in self-reported well-being and beneficial affordances of green space. These analyses contribute to a further understanding of how the accessibility and usability of green spaces may increase people’s neighborhood satisfaction. It highlights the importance of perceived quality in addition to the amount of green space when examining the beneficial effects of green space. PMID:28509879

  17. Quality over Quantity: Contribution of Urban Green Space to Neighborhood Satisfaction.

    PubMed

    Zhang, Yang; Van den Berg, Agnes E; Van Dijk, Terry; Weitkamp, Gerd

    2017-05-16

    There is increasing evidence that the quality of green space significantly contributes to neighborhood satisfaction and well-being, independent of the mere amount of green space. In this paper, we examined residents' perceptions of the quality and beneficial affordances of green space in relation to objectively assessed accessibility and usability. We used data from a survey in two neighborhoods (N = 223) of a medium-sized city in the Netherlands, which were similar in the amount of green space and other physical and socio-demographic characteristics, but differed in the availability of accessible and usable green spaces. Results show that residents of the neighborhood with a higher availability of accessible and usable green spaces were more satisfied with their neighborhood. This difference was statistically mediated by the higher level of perceived green space quality. Neighborhood satisfaction was significantly positively related to well-being. However, residents of the two neighborhoods did not differ in self-reported well-being and beneficial affordances of green space. These analyses contribute to a further understanding of how the accessibility and usability of green spaces may increase people's neighborhood satisfaction. It highlights the importance of perceived quality in addition to the amount of green space when examining the beneficial effects of green space.

  18. Large Prefabricated Concrete Panels Collective Dwellings from the 1970s: Context and Improvements

    NASA Astrophysics Data System (ADS)

    Muntean, Daniel M.; Ungureanu, Viorel; Petran, Ioan; Georgescu, Mircea

    2017-10-01

    The period between the 1960s and 1970s had a significant impact on the urban development of major cities in Romania. Because of the vast expansion of industry, the urban population increased massively, due to the large number of workers coming from rural areas. This intense process led to a shortage of homes on the housing market. In order to rapidly build new homes, standardized residential projects were erected using large prefabricated concrete panels. By using repetitive patterns, such buildings were built in a short amount of time throughout the entire country. Nowadays, these buildings represent 1.8% of the built environment and accommodate more than half of a city's population. Even though these units have reached only half their intended life span, they fail to satisfy present living standards and consume huge amounts of energy for heating, cooling, ventilation and lighting. Because these buildings are based on standardized projects and were built on such a large scale, a system that brings them to current standards will not only benefit the buildings but also significantly improve the quality of life within them. With the transition of existing power grids to a "smart grid", such units can become micro power plants in future electricity networks, thus contributing to micro-generation and energy storage. Considering the EU 20-20-20 commitments, finding alternative and innovative strategies for further improving these buildings through locally adapted measures is one of today's most pressing issues. This research offers a possible retrofitting scenario for these buildings towards a sustainable future. The building envelope is upgraded using a modular insulation system with integrated solar cells. Renewable energy systems for cooling and ventilation are integrated in order to provide flexibility of the indoor climate. Due to their small floor area, the apartments are redesigned for a more efficient use of space and improved natural lighting. Active core modules are placed on top of the unused attics and a solar panel array is introduced. Furthermore, accessibility issues are addressed by facilitating access for disabled people and implementing an elevator system that these buildings currently lack.

  19. 33 CFR 133.7 - Requests: Amount.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 133.7 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY TRUST FUND; STATE ACCESS § 133.7... amount anticipated for immediate removal action for a single oil pollution incident, but, in any event...

  20. 33 CFR 133.7 - Requests: Amount.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 133.7 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY TRUST FUND; STATE ACCESS § 133.7... amount anticipated for immediate removal action for a single oil pollution incident, but, in any event...

  1. 33 CFR 133.7 - Requests: Amount.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 133.7 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY TRUST FUND; STATE ACCESS § 133.7... amount anticipated for immediate removal action for a single oil pollution incident, but, in any event...

  2. 33 CFR 133.7 - Requests: Amount.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 133.7 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY TRUST FUND; STATE ACCESS § 133.7... amount anticipated for immediate removal action for a single oil pollution incident, but, in any event...

  3. 33 CFR 133.7 - Requests: Amount.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 133.7 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY TRUST FUND; STATE ACCESS § 133.7... amount anticipated for immediate removal action for a single oil pollution incident, but, in any event...

  4. The ESGF Software Stack: a Configurable and Extensible Framework for Enabling Access to Geospatial Data

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Bell, G. M.; Williams, D.; Harney, J.

    2012-12-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing state-of-the-art services for the management and access of Earth system data. ESGF is currently used to serve the totality of the model output used for the forthcoming IPCC 5th assessment report on climate change, as well as supporting observational and reanalysis datasets. It has also been adopted by several other projects that focus on global, regional and local climate modeling. The ESGF software stack is composed of several modular applications that cover related but disjoint areas of functionality: data publishing, data search and discovery, data access, user management, security, and federation. Overall, the ESGF infrastructure offers a configurable end-to-end solution to the problem of enabling web-based access to large amounts of geospatial data. This talk will present the architectural and configuration options that are available to a data provider leveraging ESGF to serve their data: which services to expose, how to scale to larger data collections, how to establish access control, how to customize the user interface, and others. Additionally, the framework provides extension points that allow each site to plug in custom functionality such as crawling of specific metadata repositories, exposing domain-specific analysis and visualization services, or developing custom access clients that interact with the system APIs. These configuration and extension capabilities are based on simple but effective domain-specific object models that underpin the software applications: the data model, the security model, and the federation model. The ESGF software stack is developed collaboratively by software engineers at many institutions around the world, and is made freely available to the community under an open source license to promote adoption, reuse, inspection and continuous improvement.
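
    ESGF's search service is exposed as a RESTful API, so a custom client can be very small. A minimal sketch follows; the index node URL and facet values are illustrative examples rather than a recommendation of a particular node or project:

    ```python
    import requests

    resp = requests.get(
        "https://esgf-node.llnl.gov/esg-search/search",
        params={
            "project": "CMIP5",
            "variable": "tas",       # near-surface air temperature
            "time_frequency": "mon",
            "format": "application/solr+json",
            "limit": 5,
        },
        timeout=30,
    )
    for doc in resp.json()["response"]["docs"]:
        print(doc["id"])  # dataset identifiers matching the facet query
    ```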

  5. Risk of subacute ruminal acidosis in sheep with separate access to forage and concentrate.

    PubMed

    Commun, L; Mialon, M M; Martin, C; Baumont, R; Veissier, I

    2009-10-01

    This study aimed to investigate whether sheep offered free-choice intake of forage and concentrate develop subacute ruminal acidosis (SARA) and to identify SARA-associated feeding behavior components. In a crossover design over two 28-d periods, 11 rumen-cannulated wethers received wheat and alfalfa hay in 2 separate compartments. Concentrate and forage were provided for ad libitum access or in a fixed amount corresponding to 80% of ad libitum hay intake with a concentrate:forage ratio of 60:40 on a DM basis. In both diets, sheep were fed 2 equal portions at 0800 and 1600 h. Ruminal pH, voluntary intake, and feeding behavior were recorded continuously from d 1 to 9 and d 15 to 23 in each period. When no measurements were performed, the animals were housed in larger pens with straw bedding. When fed for ad libitum intake, the sheep ingested 1,340 g of DM/d consisting of 49.1% wheat, whereas with the fixed diet they ate 872 g of DM/d consisting of 58.4% wheat. Sheep fed for ad libitum intake spent more time with ruminal pH < 5.6 than when fed in fixed amounts (7.77 vs. 3.05 h/d, P < 0.001). The time spent with ruminal pH < 5.6 was mainly linked to the amount of feed ingested and especially the amount of wheat (P < 0.001). Our results suggest that when fed for ad libitum intake with free-choice wheat, the achieved concentrate:forage ratio of near 50:50 and a larger hay intake enable sheep to consume more wheat. When sheep were fed for ad libitum intake, feeding bouts were spread evenly throughout the day. Although ruminal pH reached the same minimum level in both diets after main meals, the time to reach the pH nadir was longer with the ad libitum diet (P < 0.001). In addition, after reaching this minimum value, ruminal pH increased more slowly on this diet, inducing a decreased preprandial ruminal pH (P < 0.001). Consequently, the ad libitum diet led to a longer time below pH 5.6. A slow decrease in ruminal pH may enable sheep to consume larger quantities of feed. However, free access to concentrate maintains a continuously elevated concentration of ruminal fermentation end products and so requires more time for pH to return to neutral values. Thus, the interval between feed distributions should be as large as possible to help restore preprandial ruminal pH and to limit time spent with pH < 5.6.

  6. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often can not even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relation database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
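
    The project's prototype targets MySQL, but the general idea, certificate-based authentication handled by the database connection itself rather than a bolt-on password layer, can be sketched with any engine that supports mutual TLS. A hypothetical example using PostgreSQL's libpq parameters via psycopg2; the host, paths, and schema below are invented:

    ```python
    import psycopg2

    # Mutual TLS: the server's identity is verified and the client presents a
    # grid-style X.509 credential, so no clear-text password crosses the wire.
    conn = psycopg2.connect(
        host="conditions-db.example.org",        # placeholder host
        dbname="detector_conditions",
        user="alice_analyst",
        sslmode="verify-full",
        sslcert="/home/alice/.globus/usercert.pem",
        sslkey="/home/alice/.globus/userkey.pem",
        sslrootcert="/etc/grid-security/ca-bundle.pem",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT run, calib FROM calibrations WHERE run = %s", (1234,))
        print(cur.fetchall())
    ```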

  7. A Phylogenomic Approach Based on PCR Target Enrichment and High Throughput Sequencing: Resolving the Diversity within the South American Species of Bartsia L. (Orobanchaceae)

    PubMed Central

    Tank, David C.

    2016-01-01

    Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or—more importantly—recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300 bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500 bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500 bp for 349 samples. PMID:26828929
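
    Two steps of such a pipeline, demultiplexing reads by their sample barcode and building a majority-rule consensus, can be sketched in a few lines. The barcodes and reads below are toy values, and the real pipeline additionally separates alleles at heterozygous sites:

    ```python
    from collections import Counter, defaultdict

    BARCODES = {"ACGT": "sample_01", "TTAG": "sample_02"}  # invented tags

    reads = ["ACGTGGGCATTT", "ACGTGGGCATTA", "TTAGCCCGATTT", "ACGTGGGCATTT"]

    # Demultiplex: route each read to its sample by the barcode prefix.
    by_sample = defaultdict(list)
    for read in reads:
        tag, insert = read[:4], read[4:]
        if tag in BARCODES:
            by_sample[BARCODES[tag]].append(insert)

    def consensus(seqs):
        # Majority base at each position; ties resolved arbitrarily by Counter.
        return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

    for sample, seqs in sorted(by_sample.items()):
        print(sample, consensus(seqs))
    ```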

  8. The global Cretaceous-Tertiary fire: Biomass or fossil carbon

    NASA Technical Reports Server (NTRS)

    Gilmour, Iain; Guenther, Frank

    1988-01-01

    The global soot layer at the K-T boundary indicates a major fire triggered by meteorite impact. However, it is not clear whether the principal fuel was biomass or fossil carbon. Forests are favored by the delta C-13 value, which is close to the average for trees, but the total amount of elemental C is approximately 10 percent of the present living carbon, and thus requires very efficient conversion to soot. PAH were analyzed at Woodside Creek in the hope of finding a diagnostic molecular marker. A promising candidate is 1-methyl-7-isopropyl phenanthrene (retene), which is probably derived by low-temperature degradation of abietic acid. Unlike other PAH that form by pyrosynthesis at higher temperatures, retene has retained the characteristic side chains of its parent molecule. A total of 11 PAH compounds were identified in the boundary clay. Retene is present in substantial abundance; the identification was confirmed by analysis of a retene standard. Retene is characteristic of the combustion of resinous higher plants. Its formation depends on both temperature and oxygen access, and is apparently highest in oxygen-poor fires. Such fires would also produce soot more efficiently, which may explain the high soot abundance. The relatively high level of coronene is not typical of a wood combustion source, however, though it can be produced during high-temperature pyrolysis of methane and presumably other H,C-containing materials. This would require large, hot, low-O2 zones, which may occur only in very large fires. The presence of retene indicates that biomass was a significant fuel source for the soot at the Cretaceous-Tertiary boundary. The total amount of elemental C produced requires a greater than 3 percent soot yield, which is higher than typically observed for wildfires. However, retene and presumably coronene imply limited access of O2 and hence a high soot yield.

  9. Exploring Possibilities for Transforming Established Subscription-based Scientific Journals into Open Access Journals. Present Situation, Transformation Criteria, and Exemplary Implementation within Trans-O-MIM.

    PubMed

    Haux, Reinhold; Kuballa, Stefanie; Schulze, Mareike; Böhm, Claudia; Gefeller, Olaf; Haaf, Jan; Henning, Peter; Mielke, Corinna; Niggemann, Florian; Schürg, Andrea; Bergemann, Dieter

    2016-12-07

    Based on today's information and communication technologies, the open access paradigm has become an important approach for adequately communicating new scientific knowledge. This article summarizes the present situation for journal transformation, presents criteria for adequate transformation as well as a specific approach for it, and describes our exemplary implementation of such a journal transformation. The methods comprised studying the respective literature as well as discussing this topic in various discussion groups and meetings (primarily of editors and publishers, but also of authors and readers), with long-term experience as editors and/or publishers of scientific publications as prerequisite. There is a clear will, particularly among political and funding organizations, towards open access publishing. In spite of this, a large amount of scientific knowledge is still communicated through subscription-based journals. For successfully transforming such journals into open access, sixteen criteria for a goal-oriented, stepwise, sustainable, and fair transformation are suggested, and the Tandem Model is introduced as a transformation approach. Our exemplary implementation is done in the Trans-O-MIM project, which is exploring strategies, models and evaluation metrics for journal transformation. As an instance, the journal Methods of Information in Medicine will apply the Tandem Model from 2017 onwards. Within Trans-O-MIM we will reach at least nine of the sixteen criteria for adequate transformation. It was positive to implement Trans-O-MIM as an international research project. After the first steps in transforming Methods have been made successfully, challenges will remain, among others, in identifying appropriate incentives for open access publishing in order to support its transformation.

  10. Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality

    PubMed Central

    Keller, Abiola; Litzelman, Kristin; Wisk, Lauren E.; Maddox, Torsheika; Cheng, Erika Rose; Creswell, Paul D.; Witt, Whitney P.

    2012-01-01

    Objective This study sought to examine the relationship among the amount of stress, the perception that stress affects health, and health and mortality outcomes in a nationally-representative sample of U.S. adults. Methods Data from the 1998 National Health Interview Survey were linked to prospective National Death Index mortality data through 2006. Separate logistic regression models were used to examine the factors associated with current health status and psychological distress. Cox proportional hazard models were used to determine the impact of perceiving that stress affects health on all-cause mortality. Each model specifically examined the interaction between the amount of stress and the perception that stress affects health, controlling for sociodemographic, health behavior, and access to healthcare factors. Results 33.7% of nearly 186 million (n=28,753) U.S. adults perceived that stress affected their health a lot or to some extent. Both higher levels of reported stress and the perception that stress affects health were independently associated with an increased likelihood of worse health and mental health outcomes. The amount of stress and the perception that stress affects health interacted such that those who reported a lot of stress and that stress impacted their health a lot had a 43% increased risk of premature death (HR = 1.43, 95% CI [1.20, 1.71]). Conclusions High amounts of stress and the perception that stress impacts health are each associated with poor health and mental health. Individuals who perceived that stress affects their health and reported a large amount of stress had an increased risk of premature death. PMID:22201278
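
    The central analysis is a Cox proportional hazards model with an interaction between the amount of stress and the perception that stress affects health. A minimal sketch of such a model using the lifelines package on synthetic data; all variable names and values are stand-ins for the survey items, not the study's dataset:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 1000
        df = pd.DataFrame({
            "stress": rng.integers(0, 2, n),      # 1 = reports a lot of stress (synthetic)
            "perception": rng.integers(0, 2, n),  # 1 = believes stress harms health
            "age": rng.normal(45, 12, n),         # one stand-in covariate
        })
        df["interaction"] = df["stress"] * df["perception"]
        df["years"] = rng.exponential(10, n)      # synthetic follow-up time
        df["died"] = rng.integers(0, 2, n)        # synthetic event indicator

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years", event_col="died")
        cph.print_summary()  # the 'interaction' hazard ratio is the joint-effect test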

  11. Power System Implementation and Demonstration at Camp Katuu, Palau

    DTIC Science & Technology

    2011-05-11

    Excerpt of a comparison table of panel rack-mounting configurations (Horizontal Rows; 3 Vertical Rows; 3 Vertical Rows with Center Walkway; 6 Vertical Rows):
    1. Amount of rail mounting (lf): 1440', 1800', 1800', 1440'
    2. Ease of rail installation: ...; some rail cutting required to clear walkway; requires two-level rail mounting system; requires two-level rail mounting system; no rail...
    Maintenance access: 21" horizontal & vertical walkway, does not have direct access to all panels; accessible with 15" walkways; direct access to each panel and...

  12. Buried and accessible surface area control intrinsic protein flexibility.

    PubMed

    Marsh, Joseph A

    2013-09-09

    Proteins experience a wide variety of conformational dynamics that can be crucial for facilitating their diverse functions. How is the intrinsic flexibility required for these motions encoded in their three-dimensional structures? Here, the overall flexibility of a protein is demonstrated to be tightly coupled to the total amount of surface area buried within its fold. A simple proxy for this, the relative solvent-accessible surface area (Arel), therefore shows excellent agreement with independent measures of global protein flexibility derived from various experimental and computational methods. Application of Arel on a large scale demonstrates its utility by revealing unique sequence and structural properties associated with intrinsic flexibility. In particular, flexibility as measured by Arel shows little correspondence with intrinsic disorder, but instead tends to be associated with multiple domains and increased α-helical structure. Furthermore, the apparent flexibility of monomeric proteins is found to be useful for identifying quaternary-structure errors in published crystal structures. There is also a strong tendency for the crystal structures of more flexible proteins to be solved to lower resolutions. Finally, local solvent accessibility is shown to be a primary determinant of local residue flexibility. Overall, this work provides both fundamental mechanistic insight into the origin of protein flexibility and a simple, practical method for predicting flexibility from protein structures. © 2013 Elsevier Ltd. All rights reserved.
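
    A_rel compares a protein's observed solvent-accessible surface area with the value expected for a protein of its size. A sketch of the observed side of that calculation with Biopython's Shrake-Rupley implementation; the power-law constants for the expected SASA below are placeholders, not the values fitted in the paper:

        from Bio.PDB import PDBParser
        from Bio.PDB.SASA import ShrakeRupley

        parser = PDBParser(QUIET=True)
        structure = parser.get_structure("prot", "protein.pdb")  # hypothetical file

        sr = ShrakeRupley()
        sr.compute(structure, level="S")        # attaches total SASA (A^2) as .sasa
        observed = structure.sasa

        n_res = sum(1 for _ in structure.get_residues())
        expected = 6.3 * (n_res * 110) ** 0.73  # placeholder power law in chain mass
        print(f"A_rel = {observed / expected:.2f}")  # higher => less buried area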

  13. Twisted photon entanglement through turbulent air across Vienna

    PubMed Central

    Krenn, Mario; Handsteiner, Johannes; Fink, Matthias; Fickler, Robert; Zeilinger, Anton

    2015-01-01

    Photons with a twisted phase front can carry a discrete, in principle, unbounded amount of orbital angular momentum (OAM). The large state space allows for complex types of entanglement, interesting both for quantum communication and for fundamental tests of quantum theory. However, the distribution of such entangled states over large distances was thought to be infeasible due to the influence of atmospheric turbulence, indicating a serious limitation on their usefulness. Here we show that it is possible to distribute quantum entanglement encoded in OAM over a turbulent intracity link of 3 km. We confirm quantum entanglement of the first two higher-order levels (with OAM = ±1ℏ and ±2ℏ). They correspond to four additional quantum channels orthogonal to all that have been used in long-distance quantum experiments so far. Therefore, a promising application would be quantum communication with a large alphabet. We also demonstrate that our link allows access to up to 11 quantum channels of OAM. The restrictive factors toward higher numbers are technical limitations that can be circumvented with readily available technologies. PMID:26578763

  14. Twisted photon entanglement through turbulent air across Vienna.

    PubMed

    Krenn, Mario; Handsteiner, Johannes; Fink, Matthias; Fickler, Robert; Zeilinger, Anton

    2015-11-17

    Photons with a twisted phase front can carry a discrete, in principle, unbounded amount of orbital angular momentum (OAM). The large state space allows for complex types of entanglement, interesting both for quantum communication and for fundamental tests of quantum theory. However, the distribution of such entangled states over large distances was thought to be infeasible due to the influence of atmospheric turbulence, indicating a serious limitation on their usefulness. Here we show that it is possible to distribute quantum entanglement encoded in OAM over a turbulent intracity link of 3 km. We confirm quantum entanglement of the first two higher-order levels (with OAM = ±1ħ and ±2ħ). They correspond to four additional quantum channels orthogonal to all that have been used in long-distance quantum experiments so far. Therefore, a promising application would be quantum communication with a large alphabet. We also demonstrate that our link allows access to up to 11 quantum channels of OAM. The restrictive factors toward higher numbers are technical limitations that can be circumvented with readily available technologies.

  15. Graphene-Based Ultra-Light Batteries for Aircraft

    NASA Technical Reports Server (NTRS)

    Calle, Carlos I.; Kaner, Richard B.

    2014-01-01

    Develop a graphene-based ultracapacitor prototype that is flexible, thin, lightweight, durable, low cost, and safe, and that will demonstrate the feasibility for use in aircraft.
    - These graphene-based devices store charge on graphene sheets and take advantage of the large accessible surface area of graphene (2,600 m2/g) to increase the electrical energy that can be stored.
    - The proposed devices should have the electrical storage capacity of thin-film ion batteries but with much shorter charge/discharge cycle times as well as longer lives.
    - The proposed devices will be carbon-based and so will not have the same issues with flammability or toxicity as standard lithium-based storage cells.
    There are two main established methods for the storage and delivery of electrical energy:
    - Batteries: store energy with electrochemical reactions; high energy densities; slow charge/discharge cycles; used in applications requiring large amounts of energy, e.g., aircraft.
    - Electrochemical capacitors: store energy in electrochemical double layers; fast charge/discharge cycles; low energy densities; used in electronic devices; large capacitors are used in truck engine cranking.
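
    The claimed benefit of the large accessible surface area can be checked with back-of-envelope arithmetic; only the 2,600 m2/g figure comes from the text above, while the areal capacitance and voltage window are assumed typical values:

        area = 2600     # m^2/g, accessible surface area of graphene (from text)
        c_areal = 0.1   # F/m^2, assumed double-layer capacitance
        v = 2.7         # V, assumed electrolyte voltage window
        c = area * c_areal                # specific capacitance, F/g
        energy_j_per_g = 0.5 * c * v**2   # E = 1/2 C V^2
        print(f"{c:.0f} F/g, {energy_j_per_g / 3.6:.0f} Wh/kg")  # 1 Wh/kg = 3.6 J/g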

  16. A peer-to-peer music sharing system based on query-by-humming

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Chang, Xinglong; Zhao, Zheng; Zhang, Yebin; Shi, Qingwei

    2007-09-01

    Today, the main traffic in peer-to-peer (P2P) networks is still multimedia files, including large numbers of music files. The study of Music Information Retrieval (MIR) has brought many encouraging achievements to the music search area. Nevertheless, research on MIR-based music search in P2P networks is still insufficient. Query by Humming (QBH) is one MIR technology that has been studied for years. In this paper, we present a server-based P2P music sharing system built on QBH and integrated with a Hierarchical Index Structure (HIS) to strengthen the relation between surface data and latent information. The HIS evolves automatically depending on the music-related items carried by each peer, such as MIDI files, lyrics, and so forth. Instead of adding a large amount of redundancy, the system generates a compact index serving multiple kinds of search input, which considerably improves on the traditional keyword-based text search mode. As network bandwidth, speed, and similar factors cease to be bottlenecks of Internet service, end users are increasingly concerned with the accessibility and accuracy of the information the Internet provides.
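
    A common core of QBH systems is to reduce both the hummed query and the stored melodies to pitch-direction contours (e.g., Parsons code) and rank candidates by edit distance. The sketch below shows that general idea only; it is not the paper's Hierarchical Index Structure:

        def parsons(pitches):
            """Reduce a pitch sequence to a contour of U(p), D(own), R(epeat)."""
            return "".join("U" if b > a else "D" if b < a else "R"
                           for a, b in zip(pitches, pitches[1:]))

        def edit_distance(s, t):
            """Standard dynamic-programming Levenshtein distance."""
            prev = list(range(len(t) + 1))
            for i, cs in enumerate(s, 1):
                cur = [i]
                for j, ct in enumerate(t, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (cs != ct)))
                prev = cur
            return prev[-1]

        library = {"tune_a": [60, 62, 64, 62, 60], "tune_b": [60, 60, 67, 67, 69]}
        query = [59, 61, 65, 61, 58]  # hummed: right contour, wrong pitches
        q = parsons(query)
        print(min(library, key=lambda k: edit_distance(q, parsons(library[k]))))  # tune_a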

  17. Occupational cancer in the European part of the Commonwealth of Independent States.

    PubMed Central

    Bulbulyan, M A; Boffetta, P

    1999-01-01

    Precise information on the number of workers currently exposed to carcinogens in the Commonwealth of Independent States (CIS) is lacking. However, the large number of workers employed in high-risk industries such as the chemical and metal industries suggests that the number of workers potentially exposed to carcinogens may be large. In the CIS, women account for almost 50% of the industrial work force. Although no precise data are available on the number of cancers caused by occupational exposures, indirect evidence suggests that the magnitude of the problem is comparable to that observed in Western Europe, representing some 20,000 cases per year. The large number of women employed in the past and at present in industries that create potential exposure to carcinogens is a special characteristic of the CIS. In recent years an increasing amount of high-quality research has been conducted on occupational cancer in the CIS; there is, however, room for further improvement. International training programs should be established, and funds from international research and development programs should be devoted to this area. In recent years, following privatization of many large-scale industries, access to employment and exposure data is becoming increasingly difficult. PMID:10350512

  18. Short- and Long-term Exposure to Low and High Dose Running Produce Differential Effects on Hippocampal Neurogenesis.

    PubMed

    Nguemeni, Carine; McDonald, Matthew W; Jeffers, Matthew S; Livingston-Thomas, Jessica; Lagace, Diane; Corbett, Dale

    2018-01-15

    Continuous running wheel (RW) exercise increases adult hippocampal neurogenesis in the dentate gyrus (DG) of rodents. Evidence suggests that greater amounts of RW exercise do not always equate to more adult-generated neurons in the hippocampus. It can also be argued that continuous access to a RW results in exercise levels not representative of human exercise patterns. This study tested whether RW paradigms that more closely represent human exercise patterns (e.g. shorter bouts, alternating daily exercise) alter neurogenesis. Neurogenesis was measured by examining the survival and fate of bromodeoxyuridine (BrdU)-labeled proliferating cells in the DG of male Sprague-Dawley rats after acute (14 days) or chronic (30 days) RW access. Rats were assigned to experimental groups based on the number of hours that they had access to a RW over two days: 0 h, 4 h, 8 h, 24 h, and 48 h. After acute RW access, rats that had unlimited access to the RW on alternating days (24 h) had a stronger neurogenic response compared to those rats that ran modest distances (4 h, 8 h) or not at all (0 h). In contrast, following chronic RW access, rats that ran a moderate amount (4 h, 8 h) had significantly more surviving cells compared to 0 h, 24 h, and 48 h. Linear regression analysis established a negative relationship between running distance and surviving BrdU+ cells in the chronic RW access cohort (R² = 0.40). These data demonstrate that, in rats, moderate amounts of RW exercise are superior to continuous daily RW exercise paradigms at promoting hippocampal neurogenesis in the long term. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. DREAM: Distributed Resources for the Earth System Grid Federation (ESGF) Advanced Management

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2015-12-01

    The data associated with climate research is often generated, accessed, stored, and analyzed on a mix of unique platforms. The volume, variety, velocity, and veracity of this data creates unique challenges as climate research attempts to move beyond stand-alone platforms to a system that truly integrates dispersed resources. Today, sharing data across multiple facilities is often a challenge due to the large variance in supporting infrastructures. This results in data being accessed and downloaded many times, which requires significant amounts of resources, places a heavy analytic development burden on the end users, and leads to mismanaged resources. Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) has begun to solve this problem. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. However, significant challenges remain, including workflow provenance, modular and flexible deployment, scalability of a diverse set of computational resources, and more. Expanding on the existing ESGF, the Distributed Resources for the Earth System Grid Federation Advanced Management (DREAM) will ensure that the access, storage, movement, and analysis of the large quantities of data that are processed and produced by diverse science projects can be dynamically distributed with proper resource management. This system will enable data from a virtually unlimited number of diverse sources to be organized and accessed from anywhere on any device (including mobile platforms). The approach offers a powerful roadmap for the creation and integration of a unified knowledge base of an entire ecosystem, including its many geophysical, geographical, social, political, agricultural, energy, transportation, and cyber aspects. The resulting aggregation of data combined with analytics services has the potential to generate an informational universe and knowledge system of unprecedented size and value to the scientific community, downstream applications, decision makers, and the public.

  20. Effect of level of autonomy on the amount of physical activity in young children

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND: Emerging research has indicated that providing choice of exercise options increases the amount of physical activity children perform. However, these studies have not yet assessed this effect using physical activities children typically have access to in a naturalistic setting. PURPOSE...

  1. 45 CFR 149.100 - Amount of reimbursement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS... reimbursement in the amount of 80 percent of the costs for health benefits (net of negotiated price concessions for health benefits) for claims incurred during the plan year that are attributed to health benefits...

  2. 45 CFR 150.321 - Determining the amount of penalty-aggravating circumstances.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement..., if there are substantial or several aggravating circumstances, CMS sets the aggregate amount of the.... CMS considers the following circumstances to be aggravating circumstances: (a) The frequency of...

  3. Database for High Throughput Screening Hits (dHITS): a simple tool to retrieve gene specific phenotypes from systematic screens done in yeast.

    PubMed

    Chuartzman, Silvia G; Schuldiner, Maya

    2018-03-25

    In the last decade several collections of Saccharomyces cerevisiae yeast strains have been created. In these collections every gene is modified in a similar manner such as by a deletion or the addition of a protein tag. Such libraries have enabled a diversity of systematic screens, giving rise to large amounts of information regarding gene functions. However, often papers describing such screens focus on a single gene or a small set of genes and all other loci affecting the phenotype of choice ('hits') are only mentioned in tables that are provided as supplementary material and are often hard to retrieve or search. To help unify and make such data accessible, we have created a Database of High Throughput Screening Hits (dHITS). The dHITS database enables information to be obtained about screens in which genes of interest were found as well as the other genes that came up in that screen - all in a readily accessible and downloadable format. The ability to query large lists of genes at the same time provides a platform to easily analyse hits obtained from transcriptional analyses or other screens. We hope that this platform will serve as a tool to facilitate investigation of protein functions to the yeast community. © 2018 The Authors Yeast Published by John Wiley & Sons Ltd.
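
    At its core such a hits database is a many-to-many mapping between screens and genes, which makes both query directions described above one join away. A sketch with a hypothetical schema, not dHITS's actual one:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE screens(id INTEGER PRIMARY KEY, paper TEXT, phenotype TEXT);
            CREATE TABLE hits(screen_id INTEGER REFERENCES screens(id), gene TEXT);
        """)
        con.execute("INSERT INTO screens VALUES (1, 'Smith 2015', 'ER stress')")
        con.executemany("INSERT INTO hits VALUES (1, ?)", [("IRE1",), ("HAC1",)])

        # Direction 1: every screen in which a gene of interest was a hit
        print(con.execute("""SELECT s.paper, s.phenotype FROM screens s
                             JOIN hits h ON h.screen_id = s.id
                             WHERE h.gene = ?""", ("HAC1",)).fetchall())

        # Direction 2: all other genes that came up in those same screens
        print(con.execute("""SELECT DISTINCT h2.gene FROM hits h1
                             JOIN hits h2 ON h1.screen_id = h2.screen_id
                             WHERE h1.gene = ? AND h2.gene <> h1.gene""",
                          ("HAC1",)).fetchall())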

  4. The Encyclopedia of Life v2: Providing Global Access to Knowledge About Life on Earth

    PubMed Central

    2014-01-01

    Abstract The Encyclopedia of Life (EOL, http://eol.org) aims to provide unprecedented global access to a broad range of information about life on Earth. It currently contains 3.5 million distinct pages for taxa and provides content for 1.3 million of those pages. The content is primarily contributed by EOL content partners (providers) that have a more limited geographic, taxonomic or topical scope. EOL aggregates these data and automatically integrates them based on associated scientific names and other classification information. EOL also provides interfaces for curation and direct content addition. All materials in EOL are either in the public domain or licensed under a Creative Commons license. In addition to the web interface, EOL is also accessible through an Application Programming Interface. In this paper, we review recent developments added for Version 2 of the web site and subsequent releases through Version 2.2, which have made EOL more engaging, personal, accessible and internationalizable. We outline the core features and technical architecture of the system. We summarize milestones achieved so far by EOL to present results of the current system implementation and establish benchmarks upon which to judge future improvements. We have shown that it is possible to successfully integrate large amounts of descriptive biodiversity data from diverse sources into a robust, standards-based, dynamic, and scalable infrastructure. Increasing global participation and the emergence of EOL-powered applications demonstrate that EOL is becoming a significant resource for anyone interested in biological diversity. PMID:24891832

  5. Access control and confidentiality in radiology

    NASA Astrophysics Data System (ADS)

    Noumeir, Rita; Chafik, Adil

    2005-04-01

    A medical record contains a large amount of data about the patient such as height, weight and blood pressure. It also contains sensitive information such as fertility, abortion, psychiatric data, sexually transmitted diseases and diagnostic results. Access to this information must be carefully controlled. Information technology has greatly improved patient care. The recent extensive deployment of digital medical images made diagnostic images promptly available to healthcare decision makers, regardless of their geographic location. Medical images are digitally archived, transferred on telecommunication networks, and visualized on computer screens. However, with the widespread use of computing and communication technologies in healthcare, the issue of data security has become increasingly important. Most of the work until now has focused on the security of data communication to ensure its integrity, authentication, confidentiality and user accountability. The mechanisms that have been proposed to achieve the security of data communication are not specific to healthcare. Data integrity can be achieved with data signature. Data authentication can be achieved with certificate exchange. Data confidentiality can be achieved with encryption. User accountability can be achieved with audits. Although these mechanisms are essential to ensure data security during its transfer on the network, access control is needed in order to ensure data confidentiality and privacy within the information system application. In this paper, we present and discuss an access control mechanism that takes into account the notion of a care process. Radiology information is categorized and a model to enforce data privacy is proposed.
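
    The care-process idea reduces to a two-part check: the requester must hold a role allowed to see the data category, and must currently be involved in the patient's care. A minimal sketch of that rule; the data model and policy below are hypothetical, not the paper's:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Request:
            user: str
            role: str        # e.g. "radiologist", "referring_physician", "clerk"
            patient: str
            category: str    # e.g. "image", "report", "sensitive"

        # Hypothetical care-process registry: who currently treats which patient
        CARE_TEAM = {"patient42": {"dr_lee", "dr_roy"}}

        # Hypothetical policy: role -> data categories that role may read
        POLICY = {"radiologist": {"image", "report", "sensitive"},
                  "referring_physician": {"image", "report"},
                  "clerk": set()}

        def may_read(req: Request) -> bool:
            """Grant access only if user is on the care team AND the role allows it."""
            on_team = req.user in CARE_TEAM.get(req.patient, set())
            allowed = req.category in POLICY.get(req.role, set())
            return on_team and allowed

        print(may_read(Request("dr_lee", "radiologist", "patient42", "sensitive")))  # True
        print(may_read(Request("dr_kim", "radiologist", "patient42", "image")))      # False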

  6. Homoplasy and mutation model at microsatellite loci and their consequences for population genetics analysis.

    PubMed

    Estoup, Arnaud; Jarne, Philippe; Cornuet, Jean-Marie

    2002-09-01

    Homoplasy has recently attracted the attention of population geneticists, as a consequence of the popularity of highly variable stepwise mutating markers such as microsatellites. Microsatellite alleles generally refer to DNA fragments of different size (electromorphs). Electromorphs are identical in state (i.e. have identical size), but are not necessarily identical by descent due to convergent mutation(s). Homoplasy occurring at microsatellites is thus referred to as size homoplasy. Using new analytical developments and computer simulations, we first evaluate the effect of the mutation rate, the mutation model, the effective population size and the time of divergence between populations on size homoplasy at the within- and between-population levels. We then review the few experimental studies that used various molecular techniques to detect size homoplasious events at some microsatellite loci. The relationship between this molecularly accessible size homoplasy and the actual amount of size homoplasy is not trivial, the former being considerably influenced by the molecular structure of microsatellite core sequences. In a third section, we show that homoplasy at microsatellite electromorphs does not represent a significant problem for many types of population genetics analyses realized by molecular ecologists, the large amount of variability at microsatellite loci often compensating for their homoplasious evolution. The situations where size homoplasy may be more problematic involve high mutation rates and large population sizes together with strong allele size constraints.
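
    Size homoplasy under the stepwise mutation model is easy to demonstrate by simulation: two lineages start from the same ancestral repeat number, mutate independently by +/-1 repeat unit, and a measurable fraction end identical in state without sharing the intervening history. The parameter values below are arbitrary:

        import random

        random.seed(1)

        def evolve(repeat, generations, mu):
            """Stepwise mutation model: each mutation adds or removes one repeat."""
            for _ in range(generations):
                if random.random() < mu:
                    repeat += random.choice((-1, 1))
            return repeat

        trials = 10_000
        same_size = sum(evolve(20, 500, 5e-3) == evolve(20, 500, 5e-3)
                        for _ in range(trials))
        # Pairs counted here are identical in state but not identical by descent
        print(f"{same_size / trials:.1%} of independent pairs converge in size")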

  7. Minimal resin embedding of multicellular specimens for targeted FIB-SEM imaging.

    PubMed

    Schieber, Nicole L; Machado, Pedro; Markert, Sebastian M; Stigloher, Christian; Schwab, Yannick; Steyer, Anna M

    2017-01-01

    Correlative light and electron microscopy (CLEM) is a powerful tool to perform ultrastructural analysis of targeted tissues or cells. The large field of view of the light microscope (LM) enables quick and efficient surveys of the whole specimen. It is also compatible with live imaging, giving access to functional assays. CLEM protocols take advantage of these features to efficiently retrace the position of targeted sites when switching from one modality to the other. They most often rely on anatomical cues that are visible by both light and electron microscopy. We present here a simple workflow where multicellular specimens are embedded in minimal amounts of resin, exposing their surface topology that can be imaged by scanning electron microscopy (SEM). LM and SEM both benefit from a large field of view that can cover whole model organisms. As a result, targeting specific anatomic locations by focused ion beam-SEM (FIB-SEM) tomography becomes straightforward. We illustrate this application on three different model organisms used in our laboratory: the zebrafish embryo Danio rerio, the marine worm Platynereis dumerilii, and the dauer larva of the nematode Caenorhabditis elegans. Here we focus on the experimental steps to reduce the amount of resin covering the samples and to image the specimens inside an FIB-SEM. We expect this approach to have widespread applications for volume electron microscopy on multiple model organisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, plus another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks, and even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, condor-like analysis jobs familiar to Tier-3 or local Computing Facility users into these distributed resources in a way that is friendly and integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS Physics community, focusing on these condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform that integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs, and automated reporting to standard CMS monitoring resources in a way that is effortless for users.
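
    The final-stage jobs in question are ordinary HTCondor submissions; the HTCondor Python bindings illustrate what such a job description looks like. This is a generic example, not CMS Connect's actual interface, and the executable and file names are placeholders:

        import htcondor  # HTCondor Python bindings

        sub = htcondor.Submit({
            "executable": "run_analysis.sh",   # placeholder user analysis script
            "arguments": "$(ProcId)",
            "output": "job.$(ProcId).out",
            "error": "job.$(ProcId).err",
            "log": "job.log",
            "request_cpus": "1",
            "request_memory": "2GB",
        })

        schedd = htcondor.Schedd()             # the local submission machine
        result = schedd.submit(sub, count=10)  # queue 10 jobs in one cluster
        print("submitted cluster", result.cluster())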

  9. Healthcare and the Roles of the Medical Profession in the Big Data Era

    PubMed Central

    YAMAMOTO, Yuji

    2016-01-01

    The accumulation of large amounts of healthcare information is in progress, and society is about to enter the Health Big Data era by linking such data. Medical professionals’ daily tasks in clinical practice have become more complicated due to information overload, accelerated technological development, and the expansion of conceptual frameworks for medical care. Further, their responsibilities are more challenging and their workload is consistently increasing. As medical professionals enter the Health Big Data era, we need to reevaluate the fundamental significance and role of medicine and investigate ways to utilize this available information and technology. For example, a data analysis on diabetes patients has already shed light on the status of accessibility to physicians and the treatment response rate. In time, large amounts of health data will help find solutions including new effective treatment that could not be discovered by conventional means. Despite the vastness of accumulated data and analyses, their interpretation is necessarily conducted by attending physicians who communicate these findings to patients face to face; this task cannot be replaced by technology. As medical professionals, we must take the initiative to evaluate the framework of medicine in the Health Big Data era, study the ideal approach for clinical practitioners within this framework, and spread awareness to the public about our framework and approach while implementing them. PMID:28299246

  10. A virtual reality environment for telescope operation

    NASA Astrophysics Data System (ADS)

    Martínez, Luis A.; Villarreal, José L.; Ángeles, Fernando; Bernal, Abel

    2010-07-01

    Astronomical observatories and telescopes are becoming increasingly large and complex systems, requiring any potential user to acquire a great amount of information before accessing them. At present, the most common way to cope with this information is to implement larger graphical user interfaces and computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in the improvement of the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, which is suggested here as a software platform for the operation of the 1-m telescope.

  11. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack ciphertexts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.
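
    The genetic-algorithm key-sequence idea can be prototyped in software before committing to HDL: a population of candidate 128-bit keys evolves by crossover and mutation, and each generation's fittest key is handed to the AES core. The bit-balance fitness below is a stand-in, since the abstract does not specify the paper's criterion:

        import random

        random.seed(42)
        KEY_BITS = 128

        def fitness(key):
            """Stand-in fitness: prefer keys with balanced 0/1 bits."""
            return -abs(bin(key).count("1") - KEY_BITS // 2)

        def crossover(a, b):
            """Single-point crossover on 128-bit integers."""
            mask = (1 << random.randrange(1, KEY_BITS)) - 1
            return (a & mask) | (b & ~mask)

        def mutate(key, rate=0.01):
            for bit in range(KEY_BITS):
                if random.random() < rate:
                    key ^= 1 << bit
            return key

        population = [random.getrandbits(KEY_BITS) for _ in range(32)]
        for _ in range(50):
            population.sort(key=fitness, reverse=True)
            parents = population[:16]
            population = parents + [mutate(crossover(random.choice(parents),
                                                     random.choice(parents)))
                                    for _ in range(16)]

        print(f"{max(population, key=fitness):032x}")  # next key for the AES core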

  12. Carbon dioxide: Global warning for nephrologists

    PubMed Central

    Marano, Marco; D’Amato, Anna; Cantone, Alessandra

    2016-01-01

    The large prevalence of respiratory acid-base disorders overlapping metabolic acidosis in the hemodialysis population should prompt nephrologists to deal with the partial pressure of carbon dioxide (pCO2) that accompanies the reduced bicarbonate concentration. Which formula is most suitable to compute the expected pCO2 is reviewed. Then, the neglected issue of CO2 content in the dialysis fluid is put under the spotlight. In fact, a considerable amount of CO2 enters the patient’s bloodstream during every hemodialysis treatment, and “acidosis by dialysate” may occur if the lungs do not properly clear away this burden of CO2. Moreover, vascular access recirculation may be easily diagnosed by detecting CO2 in the arterial line of the extracorporeal circuit, as CO2-enriched blood from the filter reenters the arterial needle. PMID:27648406
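
    One widely taught candidate for the expected pCO2 in a compensated metabolic acidosis is Winter's formula; whether it is the most suitable one for hemodialysis patients is precisely what the article reviews. A worked example:

        def expected_pco2_winters(bicarbonate_mmol_l):
            """Winter's formula: expected pCO2 (mmHg) = 1.5 x [HCO3-] + 8 (+/- 2)."""
            center = 1.5 * bicarbonate_mmol_l + 8
            return center - 2, center + 2

        lo, hi = expected_pco2_winters(18)  # a typical pre-dialysis bicarbonate
        print(f"expected pCO2: {lo:.0f}-{hi:.0f} mmHg")  # 33-37 mmHg
        # A measured pCO2 above this range points to a superimposed respiratory
        # component, e.g. lungs failing to clear CO2 gained from the dialysate.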

  13. Analysis of Land Subsidence Monitoring in Mining Area with Time-Series Insar Technology

    NASA Astrophysics Data System (ADS)

    Sun, N.; Wang, Y. J.

    2018-04-01

    Time-series InSAR technology has become a popular land subsidence monitoring method in recent years because of its advantages, such as high accuracy, wide coverage, low cost, dense monitoring points, and freedom from accessibility restrictions. In this paper, we applied two kinds of satellite data, ALOS PALSAR and RADARSAT-2, to obtain subsidence monitoring results for the study area in two time periods using time-series InSAR technology. By analyzing the deformation range, rate, and amount, a time-series analysis of land subsidence in the mining area was realized. The results show that InSAR technology can be used to monitor land subsidence over large areas and can meet the demands of subsidence monitoring in mining areas.

  14. Adaptation of XMM-Newton SAS to GRID and VO architectures via web

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.

    2008-10-01

    The XMM-Newton Scientific Analysis Software (SAS) is a robust software package that has allowed users to produce good scientific results since the beginning of the mission. This has been possible thanks to the SAS's capability to evolve with the advent of new technologies and adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example: it provides remote analysis of XMM-Newton data with access to all existing SAS functionality, while making use of GRID computing technology. This technology has recently emerged within the astrophysical community to tackle the everlasting problem of computing power for the reduction of large amounts of data.

  15. Community evolution mining and analysis in social network

    NASA Astrophysics Data System (ADS)

    Liu, Hongtao; Tian, Yuan; Liu, Xueyan; Jian, Jie

    2017-03-01

    With the development of digital and network technology, various social platforms have emerged. These social platforms have greatly facilitated access to information, attracting more and more users who rely on them every day to work, study, and communicate; at every moment, social platforms are therefore generating massive amounts of data. These data can often be modeled as complex networks, making large-scale social network analysis possible. In this paper, the existing classification model for community evolution is improved based on how community evolution relationships unfold over time in dynamic social networks, and the Evolution-Tree structure is proposed, which shows the whole life cycle of a community more clearly. Comparative tests show that the improved model can effectively mine the evolution relationships of communities.
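
    Tracking evolution events between consecutive network snapshots typically starts from a set-overlap measure such as Jaccard similarity. The sketch below labels simple continue/split/dissolve events; it illustrates the general approach, not the paper's Evolution-Tree, and the threshold is arbitrary:

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        def match_events(communities_t, communities_t1, theta=0.3):
            """Label evolution events between two snapshots of member sets."""
            events = []
            for i, c in enumerate(communities_t):
                succ = [j for j, d in enumerate(communities_t1)
                        if jaccard(c, d) >= theta]
                if not succ:
                    events.append((i, "dissolve"))
                elif len(succ) == 1:
                    events.append((i, f"continue -> {succ[0]}"))
                else:
                    events.append((i, f"split -> {succ}"))
            return events

        t0 = [{1, 2, 3, 4}, {5, 6, 7}]
        t1 = [{1, 2}, {3, 4, 8}, {5, 6, 7, 9}]
        print(match_events(t0, t1))  # community 0 splits; community 1 continues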

  16. Carbon dioxide: Global warning for nephrologists.

    PubMed

    Marano, Marco; D'Amato, Anna; Cantone, Alessandra

    2016-09-06

    The large prevalence of respiratory acid-base disorders overlapping metabolic acidosis in the hemodialysis population should prompt nephrologists to deal with the partial pressure of carbon dioxide (pCO2) that accompanies the reduced bicarbonate concentration. Which formula is most suitable to compute the expected pCO2 is reviewed. Then, the neglected issue of CO2 content in the dialysis fluid is put under the spotlight. In fact, a considerable amount of CO2 enters the patient's bloodstream during every hemodialysis treatment, and "acidosis by dialysate" may occur if the lungs do not properly clear away this burden of CO2. Moreover, vascular access recirculation may be easily diagnosed by detecting CO2 in the arterial line of the extracorporeal circuit, as CO2-enriched blood from the filter reenters the arterial needle.

  17. Natural Allelic Variations in Highly Polyploidy Saccharum Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jian; Yang, Xiping; Resende, Jr., Marcio F. R.

    Sugarcane (Saccharum spp.) is an important sugar and biofuel crop with highly polyploid and complex genomes. The Saccharum complex, comprising the Saccharum genus and a few related genera, is an important genetic resource for sugarcane breeding. A large amount of natural variation exists within the Saccharum complex. Though understanding this allelic variation has been challenging, it is critical to dissect allelic structure and to identify the alleles controlling important traits in sugarcane. To characterize natural variations in the Saccharum complex, a target enrichment sequencing approach was used to assay 12 representative germplasm accessions. In total, 55,946 highly efficient probes were designed based on the sorghum genome and the sugarcane unigene set, targeting a total of 6 Mb of the sugarcane genome. A pipeline specifically tailored for polyploid sequence variant and genotype calling was established. BWA-MEM and the sorghum genome proved to be an acceptable aligner and reference, respectively, for sugarcane target enrichment sequence analysis. Genetic variations including 1,166,066 non-redundant SNPs, 150,421 InDels, 919 gene copy number variations, and 1,257 gene presence/absence variations were detected. SNPs from three different callers (SAMtools, FreeBayes, and GATK) were compared, and the validation rates were nearly 90%. Based on the SNP loci of each accession and their ploidy levels, 999,258 single-dosage SNPs were identified, and most loci were estimated to be largely homozygous. An average of 34,397 haplotype blocks per accession was inferred. The highest divergence time among the Saccharum spp. was estimated at 1.2 million years ago (MYA). Saccharum spp. diverged from Erianthus and Sorghum approximately 5 and 6 MYA, respectively. Overall, the target enrichment sequencing approach provided an effective way to discover and catalog natural allelic variation in highly polyploid or heterozygous genomes.

  18. Accessibility Attributes of Blood Glucose Meter and Home Blood Pressure Monitor Displays for Visually Impaired Persons

    PubMed Central

    Blubaugh, Morgan V.; Uslan, Mark M.

    2012-01-01

    The vast majority of diabetes-related self-management technology utilizes small visual displays (SVDs) that often produce a low level of contrast and suffer from high levels of reflection (glare). This is a major accessibility issue for the 3.5 million Americans with diabetes who have reduced vision. The purpose of this article is to gather comparative data on the key display attributes of the SVDs used in blood glucose meters (BGMs) and home blood pressure monitors (HBPMs) on the market today and to determine which displays offer the best prospect of being accessible to people with reduced vision. Nine BGMs and eight HBPMs were identified for this study on the basis of the number of devices sold, full-functionality speech output, and advanced display technologies. An optical instrumentation system obtained contrast, reflection (glare), and font-height measurements for all 17 displays. The contrast, reflection, and font-height values for the BGMs and HBPMs varied greatly between models. The Michelson contrast values for the BGMs ranged from 11% to 98%, and font heights ranged from 0.39 to 1.00 in. The HBPMs had Michelson contrast values ranging from 55% to 96% and font heights ranging from 0.28 to 0.94 in. Due largely to the lack of display design standards for the technical requirements of SVDs, there is tremendous variability in the quality and readability of BGM and HBPM displays. Two BGMs and one HBPM exhibited high contrast values and large font heights, but most of the devices exhibited either poor contrast or exceptionally high reflection. PMID:22538132
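
    The contrast figures reported here follow the standard Michelson definition, C = (Lmax - Lmin) / (Lmax + Lmin), computed over display luminances. A worked example with illustrative luminance pairs chosen to reproduce the reported extremes:

        def michelson(l_max, l_min):
            """Michelson contrast from max/min display luminance (cd/m^2)."""
            return (l_max - l_min) / (l_max + l_min)

        print(f"{michelson(100, 80):.0%}")  # ~11%: dim segments, washed-out display
        print(f"{michelson(100, 1):.0%}")   # ~98%: near-black segments, high contrast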

  19. Natural Allelic Variations in Highly Polyploidy Saccharum Complex

    DOE PAGES

    Song, Jian; Yang, Xiping; Resende, Jr., Marcio F. R.; ...

    2016-06-08

    Sugarcane (Saccharum spp.) is an important sugar and biofuel crop with highly polyploid and complex genomes. The Saccharum complex, comprising the Saccharum genus and a few related genera, is an important genetic resource for sugarcane breeding. A large amount of natural variation exists within the Saccharum complex. Though understanding this allelic variation has been challenging, it is critical to dissect allelic structure and to identify the alleles controlling important traits in sugarcane. To characterize natural variations in the Saccharum complex, a target enrichment sequencing approach was used to assay 12 representative germplasm accessions. In total, 55,946 highly efficient probes were designed based on the sorghum genome and the sugarcane unigene set, targeting a total of 6 Mb of the sugarcane genome. A pipeline specifically tailored for polyploid sequence variant and genotype calling was established. BWA-MEM and the sorghum genome proved to be an acceptable aligner and reference, respectively, for sugarcane target enrichment sequence analysis. Genetic variations including 1,166,066 non-redundant SNPs, 150,421 InDels, 919 gene copy number variations, and 1,257 gene presence/absence variations were detected. SNPs from three different callers (SAMtools, FreeBayes, and GATK) were compared, and the validation rates were nearly 90%. Based on the SNP loci of each accession and their ploidy levels, 999,258 single-dosage SNPs were identified, and most loci were estimated to be largely homozygous. An average of 34,397 haplotype blocks per accession was inferred. The highest divergence time among the Saccharum spp. was estimated at 1.2 million years ago (MYA). Saccharum spp. diverged from Erianthus and Sorghum approximately 5 and 6 MYA, respectively. Overall, the target enrichment sequencing approach provided an effective way to discover and catalog natural allelic variation in highly polyploid or heterozygous genomes.

  20. Accessibility attributes of blood glucose meter and home blood pressure monitor displays for visually impaired persons.

    PubMed

    Blubaugh, Morgan V; Uslan, Mark M

    2012-03-01

    The vast majority of diabetes-related self-management technology utilizes small visual displays (SVDs) that often produce a low level of contrast and suffer from high levels of reflection (glare). This is a major accessibility issue for the 3.5 million Americans with diabetes who have reduced vision. The purpose of this article is to gather comparative data on the key display attributes of the SVDs used in blood glucose meters (BGMs) and home blood pressure monitors (HBPMs) on the market today and to determine which displays offer the best prospect of being accessible to people with reduced vision. Nine BGMs and eight HBPMs were identified for this study on the basis of the number of devices sold, full-functionality speech output, and advanced display technologies. An optical instrumentation system obtained contrast, reflection (glare), and font-height measurements for all 17 displays. The contrast, reflection, and font-height values for the BGMs and HBPMs varied greatly between models. The Michelson contrast values for the BGMs ranged from 11% to 98%, and font heights ranged from 0.39 to 1.00 in. The HBPMs had Michelson contrast values ranging from 55% to 96% and font heights ranging from 0.28 to 0.94 in. Due largely to the lack of display design standards for the technical requirements of SVDs, there is tremendous variability in the quality and readability of BGM and HBPM displays. Two BGMs and one HBPM exhibited high contrast values and large font heights, but most of the devices exhibited either poor contrast or exceptionally high reflection. © 2012 Diabetes Technology Society.

  1. Conducting a fully mobile and randomised clinical trial for depression: access, engagement and expense.

    PubMed

    Anguera, Joaquin A; Jordan, Joshua T; Castaneda, Diego; Gazzaley, Adam; Areán, Patricia A

    2016-01-01

    Advances in mobile technology have resulted in federal and industry-level initiatives to facilitate large-scale clinical research using smart devices. Although the benefits of technology for expanding data collection are obvious, assumptions about the reach of mobile research methods (access), participant willingness to engage in mobile research protocols (engagement), and the cost of this research (cost) remain untested. The objective was to assess the feasibility of a fully mobile randomised controlled trial using assessments and treatments delivered entirely through mobile devices to depressed individuals. Using a web-based research portal, adult participants with depression who also owned a smart device were screened, consented and randomised to 1 of 3 mental health apps for treatment: a cognitive training application, an application based on problem-solving therapy, and a mobile-sensing application promoting daily activities. Assessments of self-reported mood and cognitive function were conducted at baseline, 4, 8 and 12 weeks. Physical and social activity was monitored daily using passively collected phone use data. All treatment and assessment tools were housed on each participant's smart phone or tablet. Access: we screened 2923 people and enrolled 1098 participants in 5 months; the sample characteristics were comparable to the 2013 US census data, and recruitment via Craigslist.org yielded the largest sample. Engagement: study engagement was high during the first 2 weeks of treatment, falling to 44% adherence by the 4th week. Cost: the total amount spent on this project, including staff costs and β testing, was $314 264 over 2 years. These findings suggest that mobile randomised controlled trials can recruit large numbers of participants in a short period of time and at minimal cost, but study engagement remains challenging. NCT00540865.

  2. Validation of a simplified food frequency questionnaire for the assessment of dietary habits in Iranian adults: Isfahan Healthy Heart Program, Iran.

    PubMed

    Mohammadifard, Noushin; Sajjadi, Firouzeh; Maghroun, Maryam; Alikhasi, Hassan; Nilforoushzadeh, Farzaneh; Sarrafzadegan, Nizal

    2015-03-01

    Dietary assessment is the first step of dietary modification in community-based interventional programs. This study was performed to validate a simple food frequency questionnaire (SFFQ) for the assessment of selected food items in epidemiological studies with large sample sizes as well as in community trials. This validation study was carried out on 264 healthy adults aged ≥ 41 years living in 3 central districts of Iran: Isfahan, Najafabad, and Arak. Selected food intakes were assessed using a 48-item food frequency questionnaire (FFQ). The FFQ was interviewer-administered and was completed twice: at the beginning of the study and 2 weeks thereafter. The validity of this SFFQ was examined against intake estimated by a single 24-h dietary recall and a 2-day dietary record. Validity was evaluated using Spearman correlation coefficients between the daily frequency of food-group consumption as assessed by the FFQ and the amount of daily food-group intake assessed by the reference methods. Intraclass correlation coefficients (ICC) were used to determine reproducibility. Spearman correlation coefficients between the food-group intakes estimated by the examined and reference methods ranged from 0.105 (P = 0.378) for pickles to 0.48 (P < 0.001) for plant protein. ICCs for the reproducibility of the FFQ were between 0.47 and 0.69 for the different food groups (P < 0.001). The designed SFFQ has good relative validity and reproducibility for the assessment of selected food-group intakes. Thus, it can serve as a valid tool in epidemiological studies and clinical trials with large numbers of participants.
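
    The two statistics used are Spearman rank correlation (relative validity against the reference methods) and the intraclass correlation coefficient (reproducibility of the two FFQ administrations). A sketch on synthetic data; pingouin's intraclass_corr is one of several ways to obtain an ICC:

        import numpy as np
        import pandas as pd
        import pingouin as pg
        from scipy.stats import spearmanr

        rng = np.random.default_rng(7)
        n = 264
        ffq1 = rng.poisson(14, n)               # FFQ administration 1 (synthetic)
        ffq2 = ffq1 + rng.integers(-3, 4, n)    # FFQ administration 2
        reference = ffq1 + rng.normal(0, 4, n)  # 24-h recall + dietary record

        rho, p = spearmanr(ffq1, reference)     # relative validity
        print(f"Spearman rho = {rho:.2f}, P = {p:.3g}")

        long = pd.DataFrame({"subject": np.tile(np.arange(n), 2),
                             "admin": np.repeat(["t1", "t2"], n),
                             "intake": np.concatenate([ffq1, ffq2])})
        icc = pg.intraclass_corr(data=long, targets="subject",
                                 raters="admin", ratings="intake")
        print(icc[["Type", "ICC"]])             # reproducibility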

  3. Environmental Justice and Information Technologies: Overcoming the Information-Access Paradox in Urban Communities.

    ERIC Educational Resources Information Center

    Kellogg, Wendy A.; Mathur, Anjali

    2003-01-01

    Studies suggest that urban residents in low-income and minority communities are subject to an unequal amount of environmental pollution and inequitable enforcement practices. Projects such as Sustainable Cleveland show that key components of implementing policies are access to Internet-based information and participation community-based…

  4. 45 CFR 150.315 - Amount of penalty-General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Amount of penalty-General. 150.315 Section 150.315 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement With Respect to Issuers and Non-Federal...

  5. From Open Geographical Data to Tangible Maps: Improving the Accessibility of Maps for Visually Impaired People

    NASA Astrophysics Data System (ADS)

    Ducasse, J.; Macé, M.; Jouffrais, C.

    2015-08-01

    Visual maps must be transcribed into (interactive) raised-line maps to be accessible for visually impaired people. However, these tactile maps suffer from several shortcomings: they are long and expensive to produce, they cannot display a large amount of information, and they are not dynamically modifiable. A number of methods have been developed to automate the production of raised-line maps, but there is not yet any tactile map editor on the market. Tangible interactions proved to be an efficient way to help a visually impaired user manipulate spatial representations. Contrary to raised-line maps, tangible maps can be autonomously constructed and edited. In this paper, we present the scenarios and the main expected contributions of the AccessiMap project, which is based on the availability of many sources of open spatial data: 1/ facilitating the production of interactive tactile maps with the development of an open-source web-based editor; 2/ investigating the use of tangible interfaces for the autonomous construction and exploration of a map by a visually impaired user.

  6. p3d--Python module for structural bioinformatics.

    PubMed

    Fufezan, Christian; Specht, Michael

    2009-08-21

    High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is the optimal choice, since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, b) set theory, and c) functions that combine a) and b) and use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
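
    Typical use, as described, combines set keywords with spatial predicates in a single query string. The snippet below follows the module's described human-readable style, but the file name, structure, and query keywords are assumptions to be checked against the p3d documentation:

        from p3d.protein import Protein

        pdb = Protein("1ABC.pdb")  # hypothetical PDB file

        # Set theory plus a spatial predicate, answered via the BSP tree:
        # all protein atoms within 4 A of any heme group (query string assumed)
        for atom in pdb.query("protein and within 4 of resname HEM"):
            print(atom)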

  7. Analysis of the request patterns to the NSSDC on-line archive

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore

    1994-01-01

    NASA missions, both for earth science and for space science, collect huge amounts of data, and the rate at which data are gathered is increasing. For example, the EOSDIS project is expected to collect petabytes per year. In addition, these archives are being made available to remote users over the Internet. The ability to manage the growth in size and request activity of scientific archives depends on an understanding of the access patterns of scientific users. The National Space Science Data Center (NSSDC) of NASA Goddard Space Flight Center has run its on-line mass storage archive of space data, the National Data Archive and Distribution Service (NDADS), since November 1991. A large world-wide space research community makes use of the NSSDC, requesting more than 20,000 files per month. Since the initiation of the service, log files have been maintained that record all accesses to the archive. In this report, we present an analysis of the NDADS log files and discuss several issues, including caching, reference patterns, clustering, and system loading.
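
    The reference-pattern questions discussed (popularity skew, re-reference behavior, cache potential) reduce to counting per-file request frequencies in the logs. A sketch over a hypothetical log format of one "timestamp user file" request per line:

        from collections import Counter, defaultdict

        log_lines = [
            "1994-01-03T10:00 u1 /imp8/mag_1985.dat",
            "1994-01-03T10:05 u2 /imp8/mag_1985.dat",
            "1994-01-04T09:12 u1 /iue/spec_0042.fits",
            "1994-01-05T16:40 u3 /imp8/mag_1985.dat",
        ]

        requests = Counter()
        users = defaultdict(set)
        for line in log_lines:
            _, user, path = line.split()
            requests[path] += 1
            users[path].add(user)

        # A skewed distribution (few hot files) is what makes caching pay off
        for path, n in requests.most_common():
            print(f"{n:3d} requests by {len(users[path])} users: {path}")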

  8. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run I, may however produce large backlogs, delaying the completion of data reconstruction and ultimately the availability of data for physics analysis. In order to cope with production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present a proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, onto an external OpenStack infrastructure. We focus on the Cloud bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes with LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage.

  9. DIRAC distributed secure framework

    NASA Astrophysics Data System (ADS)

    Casajus, A.; Graciani, R.; LHCb DIRAC Team

    2010-04-01

    DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources to a large number of users. In DIRAC users are organized in groups with different needs and permissions. In order to ensure that only allowed users can access the resources and to enforce that there are no abuses, security is mandatory. All DIRAC services and clients use secure connections that are authenticated using certificates and grid proxies. Once a client has been authenticated, authorization rules are applied to the requested action based on the presented credentials. These authorization rules and the list of users and groups are centrally managed in the DIRAC Configuration Service. Users submit jobs to DIRAC using their local credentials. From then on, DIRAC has to interact with different Grid services on behalf of this user. DIRAC has a proxy management service where users upload short-lived proxies to be used when DIRAC needs to act on behalf of them. Long duration proxies are uploaded by users to a MyProxy service, and DIRAC retrieves new short delegated proxies when necessary. This contribution discusses the details of the implementation of this security infrastructure in DIRAC.

  10. Minimising hydrogen sulphide generation during steam assisted production of heavy oil

    PubMed Central

    Montgomery, Wren; Sephton, Mark A.; Watson, Jonathan S.; Zeng, Huang; Rees, Andrew C.

    2015-01-01

    The majority of global petroleum is in the form of highly viscous heavy oil. Traditionally heavy oil in sands at shallow depths is accessed by large scale mining activities. Recently steam has been used to allow heavy oil extraction with greatly reduced surface disturbance. However, in situ thermal recovery processes can generate hydrogen sulphide, high levels of which are toxic to humans and corrosive to equipment. Avoiding hydrogen sulphide production is the best possible mitigation strategy. Here we use laboratory aquathermolysis to reproduce conditions that may be experienced during thermal extraction. The results indicate that hydrogen sulphide generation occurs within a specific temperature and pressure window and corresponds to chemical and physical changes in the oil. Asphaltenes are identified as the major source of sulphur. Our findings reveal that for high sulphur heavy oils, the generation of hydrogen sulphide during steam assisted thermal recovery is minimal if temperature and pressure are maintained within specific criteria. This strict pressure and temperature dependence of hydrogen sulphide release can allow access to the world's most voluminous oil deposits without generating excessive amounts of this unwanted gas product. PMID:25670085

  11. Minimising hydrogen sulphide generation during steam assisted production of heavy oil

    NASA Astrophysics Data System (ADS)

    Montgomery, Wren; Sephton, Mark A.; Watson, Jonathan S.; Zeng, Huang; Rees, Andrew C.

    2015-02-01

    The majority of global petroleum is in the form of highly viscous heavy oil. Traditionally heavy oil in sands at shallow depths is accessed by large scale mining activities. Recently steam has been used to allow heavy oil extraction with greatly reduced surface disturbance. However, in situ thermal recovery processes can generate hydrogen sulphide, high levels of which are toxic to humans and corrosive to equipment. Avoiding hydrogen sulphide production is the best possible mitigation strategy. Here we use laboratory aquathermolysis to reproduce conditions that may be experienced during thermal extraction. The results indicate that hydrogen sulphide generation occurs within a specific temperature and pressure window and corresponds to chemical and physical changes in the oil. Asphaltenes are identified as the major source of sulphur. Our findings reveal that for high sulphur heavy oils, the generation of hydrogen sulphide during steam assisted thermal recovery is minimal if temperature and pressure are maintained within specific criteria. This strict pressure and temperature dependence of hydrogen sulphide release can allow access to the world's most voluminous oil deposits without generating excessive amounts of this unwanted gas product.

  12. Minimising hydrogen sulphide generation during steam assisted production of heavy oil.

    PubMed

    Montgomery, Wren; Sephton, Mark A; Watson, Jonathan S; Zeng, Huang; Rees, Andrew C

    2015-02-11

    The majority of global petroleum is in the form of highly viscous heavy oil. Traditionally heavy oil in sands at shallow depths is accessed by large scale mining activities. Recently steam has been used to allow heavy oil extraction with greatly reduced surface disturbance. However, in situ thermal recovery processes can generate hydrogen sulphide, high levels of which are toxic to humans and corrosive to equipment. Avoiding hydrogen sulphide production is the best possible mitigation strategy. Here we use laboratory aquathermolysis to reproduce conditions that may be experienced during thermal extraction. The results indicate that hydrogen sulphide generation occurs within a specific temperature and pressure window and corresponds to chemical and physical changes in the oil. Asphaltenes are identified as the major source of sulphur. Our findings reveal that for high sulphur heavy oils, the generation of hydrogen sulphide during steam assisted thermal recovery is minimal if temperature and pressure are maintained within specific criteria. This strict pressure and temperature dependence of hydrogen sulphide release can allow access to the world's most voluminous oil deposits without generating excessive amounts of this unwanted gas product.

  13. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
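
    The parity mechanism that gives RAID-5 its single-disk fault tolerance is plain XOR across a stripe, which the following self-contained sketch demonstrates:

      def xor_blocks(blocks):
          """XOR equal-length byte blocks together (RAID-5 parity)."""
          out = bytearray(len(blocks[0]))
          for b in blocks:
              for i, byte in enumerate(b):
                  out[i] ^= byte
          return bytes(out)

      # A stripe across a 4-disk array: 3 data blocks + 1 parity block.
      data = [b"aaaa", b"bbbb", b"cccc"]
      parity = xor_blocks(data)

      # Single-disk failure: the lost block is the XOR of the survivors.
      lost = data[1]
      recovered = xor_blocks([data[0], data[2], parity])
      assert recovered == lost

    Losing two blocks of the same stripe leaves the XOR equation with two unknowns, which is exactly why RAID-5 cannot survive multiple disk failures.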

  14. Episodic, generalized, and semantic memory tests: switching and strength effects.

    PubMed

    Humphreys, Michael S; Murray, Krista L

    2011-09-01

    We continue the process of investigating the probabilistic paired-associate paradigm in an effort to understand the memory access control processes involved and to determine whether the memory structure produced is in transition between episodic and semantic memory. In this paradigm two targets are probabilistically paired with a cue across a large number of short lists. Participants can recall the target paired with the cue in the most recent list (list-specific test), produce the first of the two targets that have been paired with that cue to come to mind (generalised test), or produce a free-association response (semantic test). Switching between a generalised test and a list-specific test did not produce a switching cost, indicating a general similarity in the control processes involved. In addition, there was evidence for a dissociation between two different strength manipulations (amount of study time and number of cue-target pairings), such that the number of pairings influenced the list-specific, generalised and semantic tests, whereas the amount of study time only influenced the list-specific and generalised tests. © 2011 Canadian Psychological Association

  15. JGI Plant Genomics Gene Annotation Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Shengqiang; Rokhsar, Dan; Goodstein, David

    2014-07-14

    Plant genomes vary in size and are highly complex, with large amounts of repeats, genome duplication and tandem duplication. Genes encode a wealth of information useful in studying an organism, so high-quality, stable gene annotation is critical. Thanks to advances in sequencing technology, the genomes and transcriptomes of many plant species have been sequenced. To turn these vast amounts of sequence data into gene annotations or re-annotations in a timely fashion, an automatic pipeline is needed. The JGI plant genomics gene annotation pipeline, called integrated gene call (IGC), is our effort toward this aim, with the aid of an RNA-seq transcriptome assembly pipeline. It utilizes several gene predictors based on homolog peptides and transcript ORFs. See Methods for details. Here we present genome annotations of JGI flagship green plants produced by this pipeline, plus Arabidopsis and rice, except for Chlamydomonas, which was annotated by a third party. The genome annotations of these species and others are used in our gene family build pipeline and are accessible via the JGI Phytozome portal.
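
    One ingredient named here, deriving ORFs from assembled transcripts, can be sketched generically. This is an illustration of the idea, not IGC code; reverse strands, alternative start codons and other real-world details are omitted.

      # Extract the longest open reading frame from a transcript sequence.
      START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

      def longest_orf(seq):
          best = ""
          for frame in range(3):
              codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
              i = 0
              while i < len(codons):
                  if codons[i] == START:
                      j = i
                      while j < len(codons) and codons[j] not in STOPS:
                          j += 1
                      orf = "".join(codons[i:j])
                      if len(orf) > len(best):
                          best = orf
                      i = j
                  i += 1
          return best

      print(longest_orf("CCATGAAATTTGGGTAACCC"))  # -> ATGAAATTTGGG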

  16. Access to mycorrhizal networks and roots of trees: importance for seedling survival and resource transfer.

    PubMed

    Teste, François P; Simard, Suzanne W; Durall, Daniel M; Guy, Robert D; Jones, Melanie D; Schoonmaker, Amanda L

    2009-10-01

    Mycorrhizal networks (MNs) are fungal hyphae that connect roots of at least two plants. It has been suggested that these networks are ecologically relevant because they may facilitate interplant resource transfer and improve regeneration dynamics. This study investigated the effects of MNs on seedling survival, growth and physiological responses, interplant resource (carbon and nitrogen) transfer, and ectomycorrhizal (EM) fungal colonization of seedlings by trees in dry interior Douglas-fir (Pseudotsuga menziesii var. glauca) forests. On a large, recently harvested site that retained some older trees, we established 160 isolated plots containing pairs of older Douglas-fir "donor" trees and either manually sown seed or planted Douglas-fir "receiver" seedlings. Seed- and greenhouse-grown seedlings were sown and planted into four mesh treatments that served to restrict MN access (i.e., planted into mesh bags with 0.5-, 35-, 250-microm pores, or without mesh). Older trees were pulse labeled with carbon (13CO2) and nitrogen (15NH4(15)NO3) to quantify resource transfer. After two years, seedlings grown from seed in the field had the greatest survival and received the greatest amounts of transferred carbon (0.0063% of donor photo-assimilates) and nitrogen (0.0018%) where they were grown without mesh; however, planted seedlings were not affected by access to tree roots and hyphae. Size of "donor" trees was inversely related to the amount of carbon transferred to seedlings. The potential for MNs to form was high (based on high similarity of EM communities between hosts), and MN-mediated colonization appeared only to be important for seedlings grown from seed in the field. These results demonstrate that MNs and mycorrhizal roots of trees may be ecologically important for natural regeneration in dry forests, but it is still uncertain whether resource transfer is an important mechanism underlying seedling establishment.

  17. Patient Organizations’ Funding from Pharmaceutical Companies: Is Disclosure Clear, Complete and Accessible to the Public? An Italian Survey

    PubMed Central

    Colombo, Cinzia; Mosconi, Paola; Villani, Walter; Garattini, Silvio

    2012-01-01

    Background Many patients' and consumers' organizations accept drug industry funding to support their activities. As drug companies and patient groups move closer, disclosure becomes essential for transparency, and the internet could be a useful means of making sponsorship information accessible to the public. This survey aims to assess the transparency of a large group of Italian patient and consumer groups and a group of pharmaceutical companies, focusing on their websites. Methodology/Principal Findings Patient and consumer groups were selected from those stated to be sponsored by a group of pharmaceutical companies on their websites. The websites were examined using two forms with principal (name of drug companies providing funds, amount of funding) and secondary indicators of transparency (section where sponsors are disclosed, update of sponsorship). Principal indicators were applied independently by two reviewers to the patient and consumer groups' websites. Discordances were resolved by discussion. One hundred fifty-seven Italian patient and consumer groups and 17 drug companies were considered. Thirteen drug companies (76%) named at least one funded group on their Italian websites. Of these, four (31%) indicated the activities sponsored and two (15%) the amount of funding. Of the 157 patient and consumer groups, 46 (29%) named at least one pharmaceutical company as providing funds. Three (6%) reported the amount of funding, 25 (54%) the activities funded, and none the proportion of income derived from drug companies. Among the groups naming pharmaceutical company sponsors, 15 (33%) declared them in a dedicated section, five (11%) on the home page, and the others in the financial report or other sections. Conclusions/Significance Disclosure of funds is scarce on Italian patient and consumer groups' websites. The levels of transparency need to be improved. Disclosure of funded patient and consumer groups is frequent on Italian pharmaceutical companies' websites, but the information is often incomplete. PMID:22590498

  18. Towards energy aware optical networks and interconnects

    NASA Astrophysics Data System (ADS)

    Glesk, Ivan; Osadola, Tolulope; Idris, Siti

    2013-10-01

    In today's world, information technology has been identified as one of the major factors driving economic prosperity. Datacenter businesses have grown significantly in the past few years. The equipment in these datacenters needs to be efficiently connected internally and to the outside world in order to enable effective exchange of information. This is why there is a need for highly scalable, energy-savvy and reliable network connectivity infrastructure capable of accommodating the large volume of data being exchanged at any time within the datacenter network and the outside network in general. The devices that ensure such connectivity currently require large amounts of energy to meet these increasing demands. In this paper, an overview of work being done towards realizing energy-aware optical networks and interconnects for datacenters is presented. An OCDMA approach is also discussed as a potential multiple-access technique for future optical network interconnections. We also present some challenges that might inhibit effective implementation of the OCDMA multiplexing scheme.
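
    The OCDMA idea mentioned above can be sketched in a few lines: each user spreads its bits over a sparse chip-time signature, and the intended receiver recovers them by correlating against its own code. The signatures below are illustrative toys, not a real optical orthogonal code set.

      import numpy as np

      CODES = {
          "user_a": np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]),
          "user_b": np.array([1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0]),
      }

      def transmit(bits, code):
          # Each data bit is spread over one code-length block of chips.
          return np.concatenate([code * b for b in bits])

      # Two users share the fibre; their chip streams add optically.
      channel = (transmit([1, 0, 1], CODES["user_a"])
                 + transmit([1, 1, 0], CODES["user_b"]))

      def receive(signal, code, threshold=3):
          # Correlate each chip block with the code and threshold.
          chips = signal.reshape(-1, len(code))
          return ((chips @ code) >= threshold).astype(int)

      print(receive(channel, CODES["user_a"]))  # -> [1 0 1]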

  19. Invisible water, visible impact: groundwater use and Indian agriculture under climate change

    DOE PAGES

    Zaveri, Esha; Grogan, Danielle S.; Fisher-Vanden, Karen; ...

    2016-08-03

    India is one of the world's largest food producers, making the sustainability of its agricultural system of global significance. Groundwater irrigation underpins India's agriculture, currently boosting crop production by enough to feed 170 million people. Groundwater overexploitation has led to drastic declines in groundwater levels, threatening to push this vital resource out of reach for millions of small-scale farmers who are the backbone of India's food security. Historically, losing access to groundwater has decreased agricultural production and increased poverty. We take a multidisciplinary approach to assess climate change challenges facing India's agricultural system, and to assess the effectiveness of large-scale water infrastructure projects designed to meet these challenges. We find that even in areas that experience climate change induced precipitation increases, expansion of irrigated agriculture will require increasing amounts of unsustainable groundwater. Finally, the large proposed national river linking project has limited capacity to alleviate groundwater stress. Thus, without intervention, poverty and food insecurity in rural India is likely to worsen.

  20. Invisible water, visible impact: groundwater use and Indian agriculture under climate change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaveri, Esha; Grogan, Danielle S.; Fisher-Vanden, Karen

    India is one of the world's largest food producers, making the sustainability of its agricultural system of global significance. Groundwater irrigation underpins India's agriculture, currently boosting crop production by enough to feed 170 million people. Groundwater overexploitation has led to drastic declines in groundwater levels, threatening to push this vital resource out of reach for millions of small-scale farmers who are the backbone of India's food security. Historically, losing access to groundwater has decreased agricultural production and increased poverty. We take a multidisciplinary approach to assess climate change challenges facing India's agricultural system, and to assess the effectiveness of large-scale water infrastructure projects designed to meet these challenges. We find that even in areas that experience climate change induced precipitation increases, expansion of irrigated agriculture will require increasing amounts of unsustainable groundwater. Finally, the large proposed national river linking project has limited capacity to alleviate groundwater stress. Thus, without intervention, poverty and food insecurity in rural India is likely to worsen.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berra, P.B.; Chung, S.M.; Hachem, N.I.

    This article presents techniques for managing a very large data/knowledge base to support multiple inference mechanisms for logic programming. Because evaluation of goals can require accessing data from the extensional database, or EDB, in very general ways, one must often resort to indexing on all fields of the extensional database facts. This presents a formidable management problem in that the index data may be larger than the EDB itself. This problem becomes even more serious in the case of very large data/knowledge bases (hundreds of gigabytes), since considerably more hardware will be required to process and store the index data. In order to reduce the amount of index data considerably without losing generality, the authors form a surrogate file, which is a hashing transformation of the facts. Superimposed code words (SCW), concatenated code words (CCW), and transformed inverted lists (TIL) are possible structures for the surrogate file. Since these transformations are quite regular and compact, the authors consider possible computer architectures for the processing of the surrogate file.
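
    A minimal sketch of the superimposed code word (SCW) variant, under our assumptions about the scheme: each field of a fact hashes to a few bit positions, the fact's code word is the OR of those per-field masks, and queries screen the surrogate file with a bitwise containment test before touching the EDB.

      import hashlib

      WORD_BITS = 64  # illustrative code-word width

      def field_mask(value, k=3):
          """Hash one field value to k bit positions (its binary code word)."""
          h = hashlib.sha1(value.encode()).digest()
          mask = 0
          for i in range(k):
              mask |= 1 << (h[i] % WORD_BITS)
          return mask

      def scw(fact):
          """Superimposed code word: OR of the per-field masks."""
          word = 0
          for field in fact:
              word |= field_mask(field)
          return word

      facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
      surrogate_file = [scw(f) for f in facts]

      # A query on any subset of fields screens the surrogate file first;
      # only the matching code words lead to EDB fact retrievals. The
      # candidate list is a superset of the true matches (false drops
      # are possible and must be filtered against the EDB).
      q = field_mask("parent") | field_mask("bob")
      print([i for i, w in enumerate(surrogate_file) if w & q == q])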

  2. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering

    PubMed Central

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. We therefore mapped the time-consuming steps of GHOSTZ, a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905
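
    The database subsequence clustering named in the title can be illustrated generically; this is our reading of the idea, not GHOSTZ's actual data structures. Similar subsequences are grouped under a representative so that a query is screened once per cluster rather than once per subsequence.

      from collections import defaultdict

      def cluster_by_seed(subseqs, k=4):
          """Group database subsequences by a crude seed (leading k-mer)."""
          clusters = defaultdict(list)
          for s in subseqs:
              clusters[s[:k]].append(s)
          return clusters

      db = ["MKVLAA", "MKVLSA", "GHTWEE", "GHTWQE"]
      for seed, members in cluster_by_seed(db).items():
          print(seed, members)
      # Each query is screened against one representative per cluster;
      # members are only visited when the representative scores well.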

  3. Arcing in LEO: Does the Whole Array Discharge?

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry

    2005-01-01

    The conventional wisdom about solar array arcing in LEO is that only the parts of the solar array that are swept over by the arc-generated plasma front are discharged in the initial arc. This limits the amount of energy that can be discharged. Recent work done at the NASA Glenn Research Center has shown that this idea is mistaken. In fact, the capacitance of the entire solar array may be discharged, which for large arrays leads to very large and possibly debilitating arcs, even if no sustained arc occurs. We present the laboratory work that conclusively demonstrates this fact by using a grounded plate that prevents the arc-plasma front from reaching certain array strings. Finally, we discuss the dependence of arc strength and arc pulse width on the capacitance that is discharged, and provide a physical mechanism for discharge of the entire array, even when parts of the array are not accessible to the arc-plasma front. Mitigation techniques are also presented.
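
    The underlying physics is capacitor discharge, E = (1/2)CV^2, so the arc energy scales with whichever capacitance actually discharges. The numbers below are purely hypothetical and only illustrate the ratio argument:

      def arc_energy(c_farads, v_volts):
          return 0.5 * c_farads * v_volts ** 2  # E = 1/2 C V^2

      V = 100.0          # hypothetical array voltage
      c_swept = 0.2e-6   # hypothetical capacitance of swept strings only
      c_total = 4.0e-6   # hypothetical whole-array capacitance

      print(arc_energy(c_swept, V), "J if only swept strings discharge")
      print(arc_energy(c_total, V), "J if the entire array discharges")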

  4. Anonymizing and Sharing Medical Text Records

    PubMed Central

    Li, Xiao-Bai; Qin, Jialun

    2017-01-01

    Health information technology has increased accessibility of health and medical data and benefited medical research and healthcare management. However, there are rising concerns about patient privacy in sharing medical and healthcare data. A large amount of these data are in free text form. Existing techniques for privacy-preserving data sharing deal largely with structured data. Current privacy approaches for medical text data focus on detection and removal of patient identifiers from the data, which may be inadequate for protecting privacy or preserving data quality. We propose a new systematic approach to extract, cluster, and anonymize medical text records. Our approach integrates methods developed in both data privacy and health informatics fields. The key novel elements of our approach include a recursive partitioning method to cluster medical text records based on the similarity of the health and medical information and a value-enumeration method to anonymize potentially identifying information in the text data. An experimental study is conducted using real-world medical documents. The results of the experiments demonstrate the effectiveness of the proposed approach. PMID:29569650
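
    The two key elements, as we read them, can be sketched with generic scikit-learn building blocks: recursive partitioning of text records into small clusters of similar documents, then value enumeration within each cluster. This is an illustration, not the authors' implementation; the records, identifiers and max_size threshold are invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      def partition(records, max_size=2):
          """Recursively bisect (text, identifier) records by text similarity."""
          if len(records) <= max_size:
              return [records]
          X = TfidfVectorizer().fit_transform([t for t, _ in records])
          labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
          left = [r for r, l in zip(records, labels) if l == 0]
          right = [r for r, l in zip(records, labels) if l == 1]
          if not left or not right:  # degenerate split: stop recursing
              return [records]
          return partition(left, max_size) + partition(right, max_size)

      def anonymize(cluster):
          """Value enumeration: each identifier is replaced by the set of
          identifiers occurring anywhere in the record's cluster."""
          pooled = "{" + "|".join(sorted(i for _, i in cluster)) + "}"
          return [t.replace(i, pooled) for t, i in cluster]

      records = [("pt John Smith, chest pain", "John Smith"),
                 ("pt Jane Doe, chest pain", "Jane Doe"),
                 ("pt Ann Lee, wrist fracture", "Ann Lee"),
                 ("pt Bo Kim, wrist fracture", "Bo Kim")]
      for cluster in partition(records):
          print(anonymize(cluster))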

  5. Invisible water, visible impact: groundwater use and Indian agriculture under climate change

    NASA Astrophysics Data System (ADS)

    Zaveri, Esha; Grogan, Danielle S.; Fisher-Vanden, Karen; Frolking, Steve; Lammers, Richard B.; Wrenn, Douglas H.; Prusevich, Alexander; Nicholas, Robert E.

    2016-08-01

    India is one of the world’s largest food producers, making the sustainability of its agricultural system of global significance. Groundwater irrigation underpins India’s agriculture, currently boosting crop production by enough to feed 170 million people. Groundwater overexploitation has led to drastic declines in groundwater levels, threatening to push this vital resource out of reach for millions of small-scale farmers who are the backbone of India’s food security. Historically, losing access to groundwater has decreased agricultural production and increased poverty. We take a multidisciplinary approach to assess climate change challenges facing India’s agricultural system, and to assess the effectiveness of large-scale water infrastructure projects designed to meet these challenges. We find that even in areas that experience climate change induced precipitation increases, expansion of irrigated agriculture will require increasing amounts of unsustainable groundwater. The large proposed national river linking project has limited capacity to alleviate groundwater stress. Thus, without intervention, poverty and food insecurity in rural India is likely to worsen.

  6. A Streaming Content Distribution Network for E-Learning Support

    ERIC Educational Resources Information Center

    Esteve, M.; Molina, B.; Palau, C.; Fortino, G.

    2006-01-01

    To date e-Learning material has usually been accessed and delivered through a central web server. As the number of users, the amount of information, the frequency of accesses and the volume of data increase, together with the introduction of multimedia streaming applications, a decentralized content distribution architecture is necessary. In this…

  7. 75 FR 17622 - Equal Access to Justice Act Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-07

    ... enforceable by the Director of FHFA until such regulations are modified, terminated, set aside, or superseded by the Director of FHFA (see section 1302 and section 1312 of HERA). B. Equal Access to Justice... demand for a lesser amount. Director would be defined as the Director of the Federal Housing Finance...

  8. Searching to Translate and Translating to Search: When Information Retrieval Meets Machine Translation

    ERIC Educational Resources Information Center

    Ture, Ferhan

    2013-01-01

    With the adoption of web services in daily life, people have access to tremendous amounts of information, beyond any human's reading and comprehension capabilities. As a result, search technologies have become a fundamental tool for accessing information. Furthermore, the web contains information in multiple languages, introducing another barrier…

  9. The Accessible Pantry: Food Identification Tips, Tools, and Techniques

    ERIC Educational Resources Information Center

    Sokol-McKay, Debra A.; Michels, Dianne

    2006-01-01

    For individuals with visual impairments, poorly designed labels can be barriers to receiving safe and independent access to important information about products in daily use. The authors discuss how organization and proper lighting can reduce the amount of labeling needed on food products and indicate how individuals with visual impairments can…

  10. Designing Hypercontextualized Games: A Case Study with LieksaMyst

    ERIC Educational Resources Information Center

    Sedano, Carolina Islas; Sutinen, Erkki; Vinni, Mikko; Laine, Teemu H.

    2012-01-01

    Digital technology empowers one to access vast amounts of on-line data. From a learning perspective, however, it is difficult to access meaningful on-site information within a given context. The Hypercontextualized Game (HCG) design model interweaves on-site resources, translated as content, and the digital game. As a local game design process,…

  11. Public Internet Access Points (PIAPs) and Their Social Impact: A Case Study from Turkey

    ERIC Educational Resources Information Center

    Afacan, Gulgun; Er, Erkan; Arifoglu, Ali

    2013-01-01

    Building public Internet access points (PIAPs) is a significant contribution of governments towards achieving an information society. While many developing countries are investing great amounts to establish PIAPs today, people may not use PIAPs effectively. Yet, the successful implementation of PIAPs is the result of citizens' acceptance to use…

  12. Internet Use and Cognitive Development: A Theoretical Framework

    ERIC Educational Resources Information Center

    Johnson, Genevieve

    2006-01-01

    The number of children and adolescents accessing the Internet as well as the amount of time online are steadily increasing. The most common online activities include playing video games, accessing web sites, and communicating via chat rooms, email, and instant messaging. A theoretical framework for understanding the effects of Internet use on…

  13. Pay Big to Publish Fast: Academic Journal Rackets

    ERIC Educational Resources Information Center

    Truth, Frank

    2012-01-01

    In the context of open-access (OA) academic publishing, the mounting pressure across global academe to publish or perish has spawned an exponentially growing number of dodgy academic e-journals charging high fees to authors, often US$300-650, and even triple that amount, promising super-fast processing and open-access (OA) online publication.…

  14. Not All Insurance Is Equal: Differential Treatment and Health Outcomes by Insurance Coverage Among Nonelderly Adult Patients With Heart Attack.

    PubMed

    Niedzwiecki, Matthew J; Hsia, Renee Y; Shen, Yu-Chu

    2018-06-05

    The Affordable Care Act has provided health insurance to a large portion of the uninsured in the United States. However, different types of health insurance provide varying amounts of reimbursements to providers, which may lead to different types of treatment, potentially worsening health outcomes in patients covered by low-reimbursement insurance plans, such as Medicaid. The objective was to determine differences in access, treatment, and health outcomes by insurance type, using hospital fixed effects. We conducted a multivariate regression analysis using patient-level data for nonelderly adult patients with acute myocardial infarction in California from January 1, 2001, to December 31, 2014, as well as hospital-level information to control for differences between hospitals. The probability of Medicaid-insured and uninsured patients having access to a catheterization laboratory was higher by 4.50 and 3.75 percentage points, respectively, relative to privately insured patients. When controlling for access to percutaneous coronary intervention facilities, however, Medicaid-insured and uninsured patients had a 4.24- and 0.85-percentage point lower probability, respectively, of receiving percutaneous coronary intervention treatment compared with privately insured patients. They also had higher mortality and readmission rates relative to privately insured patients. Although Medicaid-insured and uninsured patients with acute myocardial infarction had better access to catheterization laboratories, they had significantly lower probabilities of receiving percutaneous coronary intervention treatment and a higher likelihood of death and readmission compared with privately insured patients. This provides empirical evidence that treatment received and health outcomes strongly vary between Medicaid-insured, uninsured, and privately insured patients, with Medicaid-insured patients most disproportionately affected, despite having better access to cardiac technology. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
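
    A minimal sketch of the modelling strategy described, not the authors' code: a patient-level logit of treatment receipt on insurance type, with hospital fixed effects absorbing between-hospital differences. The file and column names are hypothetical.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("ami_patients.csv")  # hypothetical analysis file

      # C(hospital_id) adds one dummy per hospital (the fixed effects);
      # privately insured patients are the reference insurance category.
      model = smf.logit(
          "received_pci ~ C(insurance, Treatment('private'))"
          " + age + female + C(hospital_id)",
          data=df,
      ).fit()
      print(model.summary())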

  15. A multi-level intervention in worksites to increase fruit and vegetable access and intake: Rationale, design and methods of the 'Good to Go' cluster randomized trial.

    PubMed

    Risica, Patricia M; Gorham, Gemma; Dionne, Laura; Nardi, William; Ng, Doug; Middler, Reese; Mello, Jennifer; Akpolat, Rahmet; Gettens, Katelyn; Gans, Kim M

    2018-02-01

    Fruit and vegetable (F&V) consumption is an important contributor to chronic disease prevention. However, most Americans do not eat adequate amounts. The worksite is an advantageous setting to reach large, diverse segments of the population with interventions to increase F&V intake, but research gaps exist. No studies have evaluated the implementation of mobile F&V markets at worksites nor compared the effectiveness of such markets with or without nutrition education. This paper describes the protocol for Good to Go (GTG), a cluster randomized trial to evaluate F&V intake change in employees from worksites randomized into three experimental arms: discount, fresh F&V markets (Access Only arm); markets plus educational components including campaigns, cooking demonstrations, videos, newsletters, and a web site (Access Plus arm); and an attention placebo comparison intervention on physical activity and stress reduction (Comparison). Secondary aims include: 1) Process evaluation to determine costs, reach, fidelity, and dose as well as the relationship of these variables with changes in F&V intake; 2) Applying a mediating variable framework to examine relationships of psychosocial factors/determinants with changes in F&V consumption; and 3) Cost effectiveness analysis of the different intervention arms. The GTG study will fill important research gaps in the field by implementing a rigorous cluster randomized trial to evaluate the efficacy of an innovative environmental intervention providing access and availability to F&V at the worksite and whether this access intervention is further enhanced by accompanying educational interventions. GTG will provide an important contribution to public health research and practice. Trial registration number NCT02729675, ClinicalTrials.gov. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
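
    For concreteness, cluster randomization here means that whole worksites, not individual employees, are assigned to arms. A minimal sketch of such an assignment (site names and seed are arbitrary):

      import random

      def randomize(worksites,
                    arms=("access_only", "access_plus", "comparison"),
                    seed=42):
          # Shuffle the clusters, then deal them round-robin into arms
          # so the arms stay balanced in size.
          rng = random.Random(seed)
          sites = worksites[:]
          rng.shuffle(sites)
          return {site: arms[i % len(arms)] for i, site in enumerate(sites)}

      print(randomize([f"site_{i:02d}" for i in range(12)]))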

  16. A multi-level intervention in worksites to increase fruit and vegetable access and intake: Rationale, design and methods of the ‘Good to Go’ cluster randomized trial

    PubMed Central

    Risica, Patricia M.; Gorham, Gemma; Dionne, Laura; Nardi, William; Ng, Doug; Middler, Reese; Mello, Jennifer; Akpolat, Rahmet; Gettens, Katelyn; Gans, Kim M.

    2018-01-01

    Background Fruit and vegetable (F&V) consumption is an important contributor to chronic disease prevention. However, most Americans do not eat adequate amounts. The worksite is an advantageous setting to reach large, diverse segments of the population with interventions to increase F&V intake, but research gaps exist. No studies have evaluated the implementation of mobile F&V markets at worksites nor compared the effectiveness of such markets with or without nutrition education. Methods This paper describes the protocol for Good to Go (GTG), a cluster randomized trial to evaluate F&V intake change in employees from worksites randomized into three experimental arms: discount, fresh F&V markets (Access Only arm); markets plus educational components including campaigns, cooking demonstrations, videos, newsletters, and a web site (Access Plus arm); and an attention placebo comparison intervention on physical activity and stress reduction (Comparison). Secondary aims include: 1) Process evaluation to determine costs, reach, fidelity, and dose as well as the relationship of these variables with changes in F&V intake; 2) Applying a mediating variable framework to examine relationships of psychosocial factors/determinants with changes in F&V consumption; and 3) Cost effectiveness analysis of the different intervention arms. Discussion The GTG study will fill important research gaps in the field by implementing a rigorous cluster randomized trial to evaluate the efficacy of an innovative environmental intervention providing access and availability to F&V at the worksite and whether this access intervention is further enhanced by accompanying educational interventions. GTG will provide an important contribution to public health research and practice. Trial registration number NCT02729675, ClinicalTrials.gov PMID:29242108

  17. Does the perception that stress affects health matter? The association with health and mortality.

    PubMed

    Keller, Abiola; Litzelman, Kristin; Wisk, Lauren E; Maddox, Torsheika; Cheng, Erika Rose; Creswell, Paul D; Witt, Whitney P

    2012-09-01

    This study sought to examine the relationship among the amount of stress, the perception that stress affects health, and health and mortality outcomes in a nationally representative sample of U.S. adults. Data from the 1998 National Health Interview Survey were linked to prospective National Death Index mortality data through 2006. Separate logistic regression models were used to examine the factors associated with current health status and psychological distress. Cox proportional hazard models were used to determine the impact of perceiving that stress affects health on all-cause mortality. Each model specifically examined the interaction between the amount of stress and the perception that stress affects health, controlling for sociodemographic, health behavior, and access to health care factors. 33.7% of nearly 186 million (unweighted n = 28,753) U.S. adults perceived that stress affected their health a lot or to some extent. Both higher levels of reported stress and the perception that stress affects health were independently associated with an increased likelihood of worse health and mental health outcomes. The amount of stress and the perception that stress affects health interacted such that those who reported a lot of stress and that stress impacted their health a lot had a 43% increased risk of premature death (HR = 1.43, 95% CI [1.2, 1.7]). High amounts of stress and the perception that stress impacts health are each associated with poor health and mental health. Individuals who perceived that stress affects their health and reported a large amount of stress had an increased risk of premature death. PsycINFO Database Record (c) 2012 APA, all rights reserved.
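
    A minimal sketch of the survival model described, assuming hypothetical column names rather than the actual NHIS variables: a Cox proportional hazards fit whose stress-by-perception interaction term is the quantity behind the reported HR = 1.43.

      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.read_csv("nhis_linked_mortality.csv")  # hypothetical file

      cph = CoxPHFitter()
      # '*' expands to both main effects plus their interaction.
      cph.fit(
          df,
          duration_col="followup_years",
          event_col="died",
          formula="stress_amount * perceives_stress_harms + age + female",
      )
      cph.print_summary()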

  18. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments. At the time of writing, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
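
    The server-side, limit-data-movement idea can be illustrated with plain OPeNDAP-style remote subsetting, though the INDIGO/ESGF stack described here is far richer. The endpoint URL and variable name below are hypothetical.

      import xarray as xr

      url = "https://esgf-node.example.org/thredds/dodsC/cmip5/tas_Amon.nc"
      ds = xr.open_dataset(url)  # lazy: no data transferred yet

      # Only the reduced result crosses the network, not the raw archive.
      regional_mean = ds["tas"].sel(lat=slice(35, 60), lon=slice(0, 30)).mean("time")
      print(regional_mean.load())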

  19. Destruction of Navy Hazardous Wastes by Supercritical Water Oxidation

    DTIC Science & Technology

    1994-08-01

    cleaning and derusting (nitrite and citric acid solutions), electroplating (acids and metal-bearing solutions), electronics and refrigeration... Waste streams that contain a large amount of mineral-acid-forming chemical species or that contain a large amount of dissolved solids present a challenge to current SCWO technology. Approved for public release...

  20. Empirical relationships between tree fall and landscape-level amounts of logging and fire

    PubMed Central

    Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C.

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape. PMID:29474487
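
    A minimal sketch of the kind of model the abstract describes, with hypothetical column names rather than the authors' 18-year dataset or exact specification: a logistic regression of tree collapse on the landscape-level amounts of logging and fire, allowing decay stage to shift the baseline.

      import pandas as pd
      import statsmodels.formula.api as smf

      trees = pd.read_csv("tree_fall_18yr.csv")  # hypothetical file

      m = smf.logit(
          "collapsed ~ frac_logged_landscape + frac_burned_landscape"
          " + C(decay_stage)",
          data=trees,
      ).fit()
      print(m.summary())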

  1. Empirical relationships between tree fall and landscape-level amounts of logging and fire.

    PubMed

    Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.

  2. MiMiR – an integrated platform for microarray data sharing, mining and analysis

    PubMed Central

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-01-01

    Background Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource, was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. Results A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into the Rosetta Resolver data analysis package. Conclusion The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies. PMID:18801157

  3. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow.

    PubMed

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-11-23

    Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/.

  4. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow

    PubMed Central

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-01-01

    Background Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. Results In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. Conclusion JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/. PMID:17123449

  5. MiMiR--an integrated platform for microarray data sharing, mining and analysis.

    PubMed

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-09-18

    Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource, was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into the Rosetta Resolver data analysis package. The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies.

  6. Job-housing imbalance and commuting of coastal industrial town in Liaoning province, China

    NASA Astrophysics Data System (ADS)

    Chen, Fei; Lu, Wei; Cai, Jun; Yang, Qiyao

    2017-11-01

    During the Twelfth Five-Year Plan period, China promulgated industrial policies promoting the relocation of energy-intensive industries to coastal areas in order to utilize marine shipping resources. Consequently, some major state-owned steel and petrochemical enterprises have relocated, resulting in large-scale coastal area development. Restricted by port construction, most of the coastal industrial areas are located in the outer suburbs. To balance employment and housing, new coastal industrial towns were constructed. In this paper, we adopt a case-study approach to analyse some typical coastal industrial towns of Liaoning Province situated on the Bohai Bay, an area currently undergoing rapid economic growth. Our investigations reveal a common pattern of long-distance commuting and a massive amount of vacant residences. More specifically, large plant relocations caused daily commutes of hundreds of kilometers, and enterprises had to provide housing subsidies and education incentives to motivate employees to relocate to coastal areas. Nonetheless, many employees still refuse to relocate due to job stability, the diverse needs of family members and access to convenient services. These employees average 4 hours of commuting daily, and some who live further away have to reside in temporary industrial housing units, subject to long-term family separation. As a result, only a small portion of employees purchase new coastal residences, mostly for investment and retirement purposes, leading to massive vacancy and a ghost-town phenomenon. In contrast to the low demand, coastal areas tend to develop large amounts of residential housing prior to industrial relocation, which may be directly related to local government finances. Some local governments have sold residential land to developers to generate revenue to support the subsequent industrial development. Given the strong preference for ocean views, residential developers tend to select coastline land for new residential towns, which further reduces the access to marine resources for major industrial enterprises. This violates the original intent of developing coastal industrial towns and drastically limits the availability of marine resources. Lastly, we analyze the co-existence of residential over-development and massive vacancies in reference to the demand and supply of land, as well as the demand for residential housing units given the choice criteria of enterprise employees.

  7. Impact analysis of off-road-vehicle use on vegetation in the Grand Mere dune environment. [Lake Michigan

    NASA Technical Reports Server (NTRS)

    Schultink, G. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A linear regression between percent nonvegetative land and the time variable was completed for the two sample areas. Sample area no. 1 showed an average vegetation loss of 1.901% per year, while the loss for sample area no. 2 amounted to 5.889% per year. Two basic reasons were assumed to account for the difference: the difference in access potential, and the amount of already fragmented vegetation complexes in existence during the first year of the comparative analysis (1970). Sample area no. 2 was located closer to potential access points and was more fragmented initially.
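
    The loss rates above are the slopes of ordinary least-squares fits of percent nonvegetative land against time. A minimal sketch of such a fit, using hypothetical yearly observations rather than the study's actual values:

        # Least-squares trend of percent nonvegetative land vs. time.
        # The observation values below are hypothetical placeholders.
        import numpy as np

        years = np.array([1970, 1972, 1974, 1976])
        pct_nonveg = np.array([10.0, 14.1, 17.8, 21.9])  # hypothetical values

        slope, intercept = np.polyfit(years, pct_nonveg, 1)
        print(f"average vegetation loss: {slope:.3f}% per year")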

  8. Broadband and scalable mobile satellite communication system for future access networks

    NASA Astrophysics Data System (ADS)

    Ohata, Kohei; Kobayashi, Kiyoshi; Nakahira, Katsuya; Ueba, Masazumi

    2005-07-01

    Due to recent market trends, NTT has begun research into next-generation satellite communication systems, such as broadband and scalable mobile communication systems. One service application objective is to provide broadband Internet access for transportation systems, temporary broadband access networks and telemetry to remote areas. While these are niche markets, the total capacity required should be significant; we set a total transmission capacity of 1 Gb/s as our goal. Our key concern is the system cost, which means that the system should be a unified system with diversified services rather than one tailored to each application. As satellites account for a large portion of the total system cost, we set the target satellite size as a small, one-ton class dry mass with a 2-kW class payload power. In addition to the payload power and weight, the mobile satellite's frequency band is extremely limited. Therefore, we need to develop innovative technologies that will reduce the weight and maximize spectrum and power efficiency. Another challenge is the need for the system to handle link variations of up to 50 dB and a wide range of data rates across applications. This paper describes the key communication system technologies: the frequency reuse strategy, multiplexing scheme, resource allocation scheme, and QoS management algorithm that ensure excellent spectrum efficiency and support a variety of services and quality requirements in the mobile environment.

  9. A simple biosynthetic pathway for large product generation from small substrate amounts

    NASA Astrophysics Data System (ADS)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-10-01

    The recently emerging discipline of synthetic biology aims to construct new biosynthetic pathways with useful biological functions. A major application of these pathways is generating large amounts of a desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway that can be induced to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows an increase of as much as three orders of magnitude in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways.
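
    The qualitative behaviour described above can be caricatured with a generic two-variable kinetic model: substrate is synthesised at an inducible rate, degraded non-specifically, and converted to product, so fast non-specific degradation keeps the substrate low while product accumulates. The parameter values below are invented for illustration; this is not the authors' model.

        # Generic substrate -> product kinetics with fast non-specific
        # substrate degradation; parameters are illustrative only.
        from scipy.integrate import solve_ivp

        k_in = 5.0    # induced substrate synthesis rate
        delta = 2.0   # fast non-specific substrate degradation
        k_cat = 0.5   # conversion of substrate to product
        gamma = 0.01  # slow product degradation

        def rhs(t, y):
            s, p = y
            ds = k_in - (delta + k_cat) * s  # substrate stays low for large delta
            dp = k_cat * s - gamma * p       # product accumulates toward k_cat*s/gamma
            return [ds, dp]

        sol = solve_ivp(rhs, [0, 200], [0.0, 0.0])
        s_end, p_end = sol.y[:, -1]
        print(f"steady substrate ~ {s_end:.2f}, accumulated product ~ {p_end:.1f}")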

  10. Student Use of Computers. Indicator of the Month.

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    Exposure to computers in school may help young people gain the computer literacy they will need to function effectively in society. The amount of access to computers at home is often directly related to socioeconomic factors. Examining the extent to which students have access to computers either at school or at home may help predict how prepared…

  11. ICT Oriented toward Nyaya: Community Computing in India's Slums

    ERIC Educational Resources Information Center

    Byker, Erik J.

    2014-01-01

    In many schools across India, access to information and communication technology (ICT) is still a rare privilege. While the Annual Status of Education Report in India (2013) showed a marginal uptick in the number of computers, the opportunities for children to use those computers have remained stagnant. The lack of access to ICT is especially…

  12. An Ethical Dilemma: Talking about Plagiarism and Academic Integrity in the Digital Age

    ERIC Educational Resources Information Center

    Thomas, Ebony Elizabeth; Sassi, Kelly

    2011-01-01

    Today, many students not only access the Internet through desktop and laptop computers at home or at school but also have copious amounts of information at their fingertips via portable devices (e.g., iPods, iPads, netbooks, smartphones). While some teachers welcome the proliferation of portable technologies and easy wireless Internet access, and…

  13. 45 CFR 150.317 - Factors CMS uses to determine the amount of penalty.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Factors CMS uses to determine the amount of... RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement With Respect to Issuers and Non-Federal Governmental Plans-Civil Money Penalties § 150.317 Factors CMS...

  14. Optimization of Materials and Interfaces for Spintronic Devices

    NASA Astrophysics Data System (ADS)

    Clark, Billy

    In recent years, spintronic devices have drawn a significant amount of research attention. This interest comes in large part from their ability to enable interesting new technologies, such as spin-transfer-torque random access memory, or to improve existing ones, such as high-signal read heads for hard disk drives. For the former, we worked on improving magnetic tunnel junctions, optimizing their thermal stability by using Ta insertion layers in the free layer. We further tried to simplify the design of the MTJ stack by attempting to replace the Co/Pd multilayer with a CoPd alloy; in this dissertation, we detail its development and examine its switching characteristics. Lastly, we investigate a highly spin-polarized material, Fe2MnGe, for optimizing hard disk drive read heads.

  15. Development of an electronic radiation oncology patient information management system.

    PubMed

    Mandal, Abhijit; Asthana, Anupam Kumar; Aggarwal, Lalit Mohan

    2008-01-01

    The quality of patient care is critically influenced by the availability of accurate information and its efficient management. Radiation oncology involves many information components: for example, information related to the patient (e.g., profile, disease site, stage, etc.), to people (radiation oncologists, radiological physicists, technologists, etc.), and to equipment (diagnostic, planning, treatment, etc.). These different data must be integrated. A comprehensive information management system is essential for efficient storage and retrieval of the enormous amounts of information. A radiation therapy patient information system (RTPIS) has been developed using open source software. PHP and JavaScript were used as the programming languages, MySQL as the database, and HTML and CSS as the design tools. The system uses typical web browsing technology on a WAMP5 server. Any user with a unique user ID and password can access this RTPIS. The user ID and password are issued separately to each individual according to the person's job responsibilities and accountability, so that users can access only the data related to their job responsibilities. With this system, authenticated users are able to use a simple web browsing procedure to gain instant access. All types of users in the radiation oncology department should find it user-friendly. Maintaining the system will not require large human resources or space. The file storage and retrieval process should be satisfactory, unique, uniform, and easily accessible, with adequate data protection. There will be very little possibility of unauthorized handling with this system, and minimal risk of loss or accidental destruction of information.
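
    As a concrete illustration of the credential scheme described above, the sketch below shows salted password hashing and login verification. It is written in Python for brevity (RTPIS itself is implemented in PHP with MySQL), and all names are invented.

        # Sketch of credential issuance and verification for a records system.
        import hashlib, hmac, os

        def make_account(password: str):
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest  # store these, never the plain password

        def verify(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)

        salt, digest = make_account("s3cret")
        print(verify("s3cret", salt, digest))  # True
        print(verify("wrong", salt, digest))   # False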

  16. Context-based electronic health record: toward patient specific healthcare.

    PubMed

    Hsu, William; Taira, Ricky K; El-Saden, Suzie; Kangarloo, Hooshang; Bui, Alex A T

    2012-03-01

    Due to the increasingly data-intensive clinical environment, physicians now have unprecedented access to detailed clinical information from a multitude of sources. However, applying this information to guide medical decisions for a specific patient case remains challenging. One issue is related to presenting information to the practitioner: displaying a large (irrelevant) amount of information often leads to information overload. Next-generation interfaces for the electronic health record (EHR) should not only make patient data easily searchable and accessible, but also synthesize fragments of evidence documented in the entire record to understand the etiology of a disease and its clinical manifestation in individual patients. In this paper, we describe our efforts toward creating a context-based EHR, which employs biomedical ontologies and (graphical) disease models as sources of domain knowledge to identify relevant parts of the record to display. We hypothesize that knowledge (e.g., variables, relationships) from these sources can be used to standardize, annotate, and contextualize information from the patient record, improving access to relevant parts of the record and informing medical decision making. To achieve this goal, we describe a framework that aggregates and extracts findings and attributes from free-text clinical reports, maps findings to concepts in available knowledge sources, and generates a tailored presentation of the record based on the information needs of the user. We have implemented this framework in a system called Adaptive EHR, demonstrating its capabilities to present and synthesize information from neurooncology patients. This paper highlights the challenges and potential applications of leveraging disease models to improve the access, integration, and interpretation of clinical patient data. © 2012 IEEE
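
    The annotate-then-filter idea at the core of this framework can be pictured in a few lines: map extracted findings to concept identifiers, then keep only the findings whose concepts appear in the active disease model. The concept IDs and the tiny "disease model" below are invented placeholders, not the system's actual knowledge sources.

        # Toy illustration of annotating findings with concepts and filtering
        # the record by disease-model relevance; all identifiers are invented.
        FINDING_TO_CONCEPT = {
            "ring-enhancing lesion": "C0221198",
            "headache": "C0018681",
            "elevated glucose": "C0020456",
        }

        GLIOMA_MODEL_CONCEPTS = {"C0221198", "C0018681"}  # model variables

        def contextualize(findings, model_concepts):
            annotated = {f: FINDING_TO_CONCEPT.get(f) for f in findings}
            return [f for f, c in annotated.items() if c in model_concepts]

        report = ["ring-enhancing lesion", "elevated glucose", "headache"]
        print(contextualize(report, GLIOMA_MODEL_CONCEPTS))
        # -> ['ring-enhancing lesion', 'headache']; the glucose finding is off-model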

  17. A Pilot Examination of the Methods Used to Counteract Insider Threat Security Risks Associated with the Use of Radioactive Materials in the Research and Clinical Setting.

    PubMed

    Tsenov, B G; Emery, R J; Whitehead, L W; Gonzalez, J Reingle; Gemeinhardt, G L

    2018-03-01

    While many organizations maintain multiple layers of security control methodologies to prevent outsiders from gaining unauthorized access, persons such as employees or contractors who have been granted legitimate access can represent an "insider threat" risk. Interestingly, some of the most notable radiological events involving the purposeful contamination or exposure of individuals appear to have been perpetrated by insiders. In the academic and medical settings, radiation safety professionals focus their security efforts on (1) ensuring controls are in place to prevent unauthorized access to or removal of sources, and (2) increasing security controls for unescorted access to large sources of radioactivity (known as "quantities of concern"). But these controls may not completely address the threat insiders represent when radioactive materials below these quantities are present. The goal of this research project was to characterize the methodologies currently employed to counteract the insider security threat of misuse or purposeful diversion of radioactive materials used in the academic and medical settings. A web-based survey was used to assess how practicing radiation safety professionals in academic and medical settings anticipate, evaluate, and control insider threat security risks within their institutions. While all respondents indicated that radioactive sources are being used in amounts below quantities of concern, only 6% consider insider threat security issues as part of the protocol review for the use of general radioactive materials. The results of this survey identify several opportunities for institutions to address security gaps.

  18. Recent Developments in Toxico-Cheminformatics: A New ...

    EPA Pesticide Factsheets

    Efforts to improve public access to chemical toxicity information resources, coupled with new high-throughput screening (HTS) data and efforts to systematize legacy toxicity studies, have the potential to significantly improve predictive capabilities in toxicology. Important recent developments include: 1) large and growing public resources that link chemical structures to biological activity and toxicity data in searchable format, and that offer more nuanced and varied representations of activity; 2) standardized relational data models that capture relevant details of chemical treatment and effects of published in vivo experiments; and 3) the generation of large amounts of new data from public efforts that are employing HTS technologies to probe a wide range of bioactivity and cellular processes across large swaths of chemical space. Most recently, EPA's DSSTox project has published several new EPA chemical data inventories (IRIS, HPV, ToxCast) and added an on-line capability for structure (substructure or similarity) searching through all or parts of the published DSSTox data files. These efforts are, for the first time in many cases, opening up a structure-paved two-way highway between previously inaccessible or isolated public chemical data repositories and large public resources, such as PubChem. In addition, public initiatives (such as ToxML) are developing systematized data models of toxicity study areas, and introducing standardized templates, controlled vocabularies, hierarchical organization, and powerful relational searching capabilities.

  19. Effect of beach management policies on recreational water quality.

    PubMed

    Kelly, Elizabeth A; Feng, Zhixuan; Gidley, Maribeth L; Sinigalliano, Christopher D; Kumar, Naresh; Donahue, Allison G; Reniers, Adrianus J H M; Solo-Gabriele, Helena M

    2018-04-15

    When beach water monitoring programs identify poor water quality, the causes are frequently unknown. We hypothesize that management policies play an important role in the frequency of fecal indicator bacteria (FIB) exceedances (enterococci and fecal coliform) at recreational beaches. To test this hypothesis, we implemented an innovative approach utilizing large amounts of monitoring data (n > 150,000 measurements per FIB) to determine associations between the frequency of contaminant exceedances and beach management practices. The large FIB database was augmented with results from a survey designed to assess management policies for 316 beaches throughout the state of Florida. The FIB and survey data were analyzed using t-tests, ANOVA, factor analysis, and linear regression. Results show that beach geomorphology (beach type) was highly associated with exceedance of regulatory standards. Low enterococci exceedances were associated with open coast beaches (n = 211) that have sparse human densities, no homeless populations, low densities of dogs and birds, bird management policies, low seaweed densities, beach renourishment, access fees, lifeguards, no nearby marinas, and storm water management. Factor analysis and a linear regression confirmed beach type as the predominant factor, with secondary influences from grooming activities (including seaweed densities and beach renourishment) and beach access (including charging fees, employing lifeguards, and the absence of nearby marinas). Our results were observable primarily because of the very large public FIB database available for analysis; similar approaches can be adopted at other beaches. The findings of this research have important policy implications because the beach management practices associated with low levels of FIB can be implemented in other parts of the US and around the world to improve recreational beach water quality. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. The Transcriptome Analysis and Comparison Explorer--T-ACE: a platform-independent, graphical tool to process large RNAseq datasets of non-model organisms.

    PubMed

    Philipp, E E R; Kraemer, L; Mountfort, D; Schilhabel, M; Schreiber, S; Rosenstiel, P

    2012-03-15

    Next-generation sequencing (NGS) technologies allow a rapid and cost-effective compilation of large RNA sequence datasets in model and non-model organisms. However, the storage and analysis of transcriptome information from different NGS platforms is still a significant bottleneck, leading to a delay in data dissemination and subsequent biological understanding. In particular, database interfaces with transcriptome analysis modules that go beyond mere read counts are missing. Here, we present the Transcriptome Analysis and Comparison Explorer (T-ACE), a tool designed for the organization and analysis of large sequence datasets, and especially suited for transcriptome projects of non-model organisms with little or no a priori sequence information. T-ACE offers a TCL-based interface, which accesses a PostgreSQL database via a PHP script. Within T-ACE, information belonging to single sequences or contigs, such as annotation or read coverage, is linked to the respective sequence and immediately accessible. Sequences and assigned information can be searched via keyword or BLAST search. Additionally, T-ACE provides within- and between-transcriptome analysis modules at the level of expression, GO terms, KEGG pathways and protein domains. Results are visualized and can be easily exported for external analysis. We developed T-ACE for laboratory environments that have only a limited amount of bioinformatics support, and for collaborative projects in which different partners work on the same dataset from different locations or platforms (Windows/Linux/MacOS). For laboratories with some experience in bioinformatics and programming, the low complexity of the database structure and the open-source code provide a framework that can be customized according to the needs of the user and transcriptome project.
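
    The keyword search over annotated contigs can be pictured with a toy query. sqlite3 is used below so the sketch is self-contained, whereas T-ACE itself runs on PostgreSQL; the table layout is invented.

        # Keyword lookup over contig annotations, mimicking a T-ACE-style search.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE contigs (id TEXT, annotation TEXT, coverage INT)")
        con.executemany("INSERT INTO contigs VALUES (?, ?, ?)", [
            ("c0001", "heat shock protein 70", 412),
            ("c0002", "unknown", 8),
            ("c0003", "heat shock transcription factor", 95),
        ])

        rows = con.execute(
            "SELECT id, annotation, coverage FROM contigs "
            "WHERE annotation LIKE ? ORDER BY coverage DESC", ("%heat shock%",))
        for row in rows:
            print(row)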

  1. PATHA: Performance Analysis Tool for HPC Applications

    DOE PAGES

    Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...

    2016-02-18

    Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs often run over thousands of CPU cores while simultaneously performing data accesses, data movements, and computation, and it is difficult to identify bottlenecks or debug performance issues in such large workflows. To address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open source big data processing tools. Our framework can ingest system logs to extract key performance measures, and apply sophisticated statistical tools and data mining methods to the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze large amounts of different types of logs and measurements. To illustrate the functionality of PATHA, we conducted a case study on workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks, which reside in three tasks of the PTF workflow and depend on the density of celestial objects.
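
    The log-mining step can be illustrated in miniature: parse per-task timing records and rank tasks by total time to surface candidate bottlenecks. The record layout and column names below are invented, not PATHA's actual schema.

        # Toy log analysis: rank workflow tasks by total elapsed time.
        import pandas as pd

        # Rows stand in for timing records parsed out of system logs.
        records = [
            ("photometry", "edison01", 532),
            ("photometry", "edison02", 518),
            ("subtraction", "edison01", 97),
            ("candidate_scan", "edison03", 1210),
        ]
        df = pd.DataFrame(records, columns=["task", "host", "seconds"])

        by_task = df.groupby("task")["seconds"].agg(["sum", "mean", "count"])
        print(by_task.sort_values("sum", ascending=False))  # slowest tasks first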

  2. Lightweight Innovative Solar Array (LISA): Providing Higher Power to Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Carr, John; Fabisinski, Leo; Russell, Tiffany; Smith, Leigh

    2015-01-01

    Affordable and convenient access to electrical power is essential for all spacecraft and is a critical design driver for the next generation of smallsats, including cubesats, which are currently extremely power limited. The Lightweight Innovative Solar Array (LISA), a concept designed, prototyped, and tested at the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama, provides an affordable, lightweight, scalable, and easily manufactured approach for power generation in space. This flexible technology has many wide-ranging applications, from serving small satellites to providing abundant power to large spacecraft in GEO and beyond. By using very thin, ultra-flexible solar arrays adhered to an inflatable structure, a large area (and thus a large amount of power) can be folded and packaged into a relatively small volume. The LISA array comprises a launch-stowed, orbit-deployed structure on which lightweight photovoltaic devices and, potentially, transceiver elements are embedded. The system will provide a 2.5- to 5-fold increase in specific power generation (Watts/kilogram), coupled with a more than 2-fold enhancement in stowed power density (Watts/cubic meter) and a decrease in cost (dollars/Watt) when compared to state-of-the-art solar arrays.

  3. Toxico-Cheminformatics: New and Expanding Public ...

    EPA Pesticide Factsheets

    High-throughput screening (HTS) technologies, along with efforts to improve public access to chemical toxicity information resources and to systematize older toxicity studies, have the potential to significantly improve information gathering efforts for chemical assessments and predictive capabilities in toxicology. Important developments include: 1) large and growing public resources that link chemical structures to biological activity and toxicity data in searchable format, and that offer more nuanced and varied representations of activity; 2) standardized relational data models that capture relevant details of chemical treatment and effects of published in vivo experiments; and 3) the generation of large amounts of new data from public efforts that are employing HTS technologies to probe a wide range of bioactivity and cellular processes across large swaths of chemical space. By annotating toxicity data with associated chemical structure information, these efforts link data across diverse study domains (e.g., ‘omics’, HTS, traditional toxicity studies), toxicity domains (carcinogenicity, developmental toxicity, neurotoxicity, immunotoxicity, etc.) and database sources (EPA, FDA, NCI, DSSTox, PubChem, GEO, ArrayExpress, etc.). Public initiatives are developing systematized data models of toxicity study areas and introducing standardized templates, controlled vocabularies, hierarchical organization, and powerful relational searching capability across capt

  4. GEMINI: a computationally-efficient search engine for large gene expression datasets.

    PubMed

    DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick

    2016-02-24

    Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number, or meta-data tag. However, in this search paradigm the form of the query (a text-based string) is mismatched with the form of the target (a genomic profile). To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an [Formula: see text] expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that in practice it achieves a query time that scales as the logarithm of the number of records. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec, compared to a brute-force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
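
    The vantage-point tree named above is a metric tree that recursively partitions points by their distance to a chosen vantage point, pruning any branch that cannot contain a closer neighbor. A compact, self-contained sketch of the idea (not GEMINI's implementation) follows:

        # Compact vantage-point tree for exact nearest-neighbor search,
        # illustrating the data structure named above (not GEMINI's code).
        import math
        import random

        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def build(points):
            if not points:
                return None
            vp, rest = points[0], points[1:]
            if not rest:
                return (vp, 0.0, None, None)
            dists = [dist(vp, p) for p in rest]
            mu = sorted(dists)[len(dists) // 2]  # median radius splits the set
            inner = [p for p, d in zip(rest, dists) if d < mu]
            outer = [p for p, d in zip(rest, dists) if d >= mu]
            return (vp, mu, build(inner), build(outer))

        def nearest(node, q, best=None):
            if node is None:
                return best
            vp, mu, inner, outer = node
            d = dist(vp, q)
            if best is None or d < best[0]:
                best = (d, vp)
            near, far = (inner, outer) if d < mu else (outer, inner)
            best = nearest(near, q, best)
            if abs(d - mu) < best[0]:  # far side may still hold a closer point
                best = nearest(far, q, best)
            return best

        random.seed(0)
        db = [tuple(random.random() for _ in range(5)) for _ in range(1000)]
        query = tuple(random.random() for _ in range(5))
        d, p = nearest(build(db), query)
        assert d == min(dist(query, x) for x in db)  # agrees with brute force
        print(f"nearest neighbor at distance {d:.4f}")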

  5. Promoting access to and use of seismic data in a large scientific community. SpaceInn data handling and archiving

    NASA Astrophysics Data System (ADS)

    Michel, Eric; Belkacem, Kevin; Samadi, Reza; Assis Peralta, Raphael de; Renié, Christian; Abed, Mahfoudh; Lin, Guangyuan; Christensen-Dalsgaard, Jørgen; Houdek, Günter; Handberg, Rasmus; Gizon, Laurent; Burston, Raymond; Nagashima, Kaori; Pallé, Pere; Poretti, Ennio; Rainer, Monica; Mistò, Angelo; Panzera, Maria Rosa; Roth, Markus

    2017-10-01

    The growing amount of seismic data available from space missions (SOHO, CoRoT, Kepler, SDO,…) but also from ground-based facilities (GONG, BiSON, ground-based large programmes…), stellar modelling and numerical simulations creates new scientific perspectives, such as characterizing stellar populations in our Galaxy or planetary systems by providing model-independent global properties of stars, such as mass, radius, and surface gravity, within several percent accuracy, as well as constraints on the age. These applications address a broad scientific community beyond the solar and stellar one, and require combining indices elaborated from data in different databases (e.g. seismic archives and ground-based spectroscopic surveys). It is thus a basic requirement to develop simple and efficient access to these various data resources and dedicated tools. In the framework of the European project SpaceInn (FP7), several data sources have been developed or upgraded. The Seismic Plus Portal has been developed, where synthetic descriptions of the most relevant existing data sources can be found, together with tools to localize existing data for a given object or period and to help with data queries. This project has been developed within the Virtual Observatory (VO) framework. In this paper, we review the various facilities and tools developed within this programme. The SpaceInn project (Exploitation of Space Data for Innovative Helio- and Asteroseismology) was initiated by the European Helio- and Asteroseismology Network (HELAS).

  6. Autonomous telemetry system by using mobile networks for a long-term seismic observation

    NASA Astrophysics Data System (ADS)

    Hirahara, S.; Uchida, N.; Nakajima, J.

    2012-04-01

    When a large earthquake occurs, it is important to know the detailed distribution of aftershocks immediately after the main shock in order to estimate the fault plane. A large amount of seismic data is also required to determine the three-dimensional seismic velocity structure around the focal area. We have developed an autonomous telemetry system using mobile networks, specialized for aftershock observations. Because the newly developed system enables quick installation and real-time data transmission over mobile networks, we can construct a dense online seismic network even in mountain areas where conventional wired networks are not available. The system is equipped with solar panels that charge a lead-acid battery, enabling long-term seismic observation without maintenance. Furthermore, it enables continuous observation at low cost with flat-rate or prepaid Internet access. We have expanded mobile communication coverage and backed up Internet access by configuring multiple mobile carriers. An embedded Linux micro server runs programs that automatically control the Internet connection and data transmission. Status monitoring and remote maintenance are available via the Internet. In case of a communication failure, internal storage can back up data for two years. The power consumption of the communication device ranges from 2.5 to 4.0 W. With a 50 Ah lead-acid battery, the system can continue to record data for four days if battery charging by the solar panels is temporarily unavailable.
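
    These endurance figures can be sanity-checked with a simple energy budget. The 12 V nominal battery voltage and the roughly 70% usable depth of discharge below are assumptions, not values stated in the abstract.

        # Back-of-envelope battery endurance check; voltage and usable depth
        # of discharge are assumed, not taken from the abstract.
        capacity_ah = 50        # battery capacity from the abstract
        voltage = 12.0          # assumed nominal lead-acid voltage
        usable_fraction = 0.7   # assumed usable depth of discharge
        load_w = 4.0            # worst-case communication-device draw

        usable_wh = capacity_ah * voltage * usable_fraction  # 420 Wh
        hours = usable_wh / load_w                           # 105 h
        print(f"~{hours / 24:.1f} days without solar charging")  # ~4.4 days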

  7. Uncertainty In Greenhouse Gas Emissions On Carbon Sequestration In Coastal and Freshwater Wetlands of the Mississippi River Delta: A Subsiding Coastline as a Proxy for Future Global Sea Level

    NASA Astrophysics Data System (ADS)

    White, J. R.; DeLaune, R. D.; Roy, E. D.; Corstanje, R.

    2014-12-01

    The highly visible phenomenon of wetland loss in coastal Louisiana (LA) is examined through the prism of carbon accumulation, wetland loss and greenhouse gas (GHG) emissions. The Mississippi River Deltaic region experiences higher relative sea level rise due to coupled subsidence and eustatic sea level rise, allowing this region to serve as a proxy for projected future global sea level rise. Carbon storage or sequestration in rapidly subsiding LA coastal marsh soils is based on vertical marsh accretion and areal change data. While coastal marshes sequester significant amounts of carbon through vertical accretion, large amounts of carbon previously sequestered in the soil profile are lost through the annual deterioration of these coastal marshes as well as through GHG emissions. Efforts are underway in Louisiana to access the carbon credit market in order to provide significant funding for coastal restoration projects. However, there is very large uncertainty in GHG emission rates related to both marsh type and temporal (daily and seasonal) effects. Very little data currently exist to address this uncertainty, which can significantly affect the carbon credit value of a particular wetland system. We provide an analysis of GHG emission rates for coastal freshwater, brackish and salt marshes compared to the net soil carbon sequestration rate. Results demonstrate that the very high uncertainty in GHG emissions can substantially alter the carbon credit value of a particular wetland system.

  8. Virtual interface substructure synthesis method for normal mode analysis of super-large molecular complexes at atomic resolution.

    PubMed

    Chen, Xuehui; Sun, Yunxiang; An, Xiongbo; Ming, Dengming

    2011-10-14

    Normal mode analysis of large biomolecular complexes at atomic resolution remains challenging in computational structural biology due to the large amounts of memory and central processing unit time required. In this paper, we present a method called the virtual interface substructure synthesis method (VISSM) to calculate approximate normal modes of large biomolecular complexes at atomic resolution. VISSM introduces the subunit interfaces as independent substructures that join contacting molecules so as to keep the integrity of the system. Compared with other approximate methods, VISSM delivers atomic modes with no need for a coarse-graining-then-projection procedure. The method was examined for 54 protein complexes against conventional all-atom normal mode analysis using the CHARMM simulation program; the overlap of the first 100 low-frequency modes is greater than 0.7 for 49 complexes, indicating its accuracy and reliability. We then applied VISSM to the satellite panicum mosaic virus (SPMV, 78,300 atoms) and to F-actin filament structures of up to 39 monomers (228,813 atoms) and found that VISSM calculations capture functionally important conformational changes accessible to these structures at atomic resolution. Our results support the idea that the dynamics of a large biomolecular complex might be understood based on the motions of its component subunits and the way in which subunits bind one another. © 2011 American Institute of Physics.
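
    The conventional all-atom reference calculation amounts to diagonalizing the mass-weighted Hessian, and the overlap quoted above is the absolute dot product between reference and approximate mode vectors. A generic numpy sketch of both steps, using a random stand-in Hessian rather than VISSM or CHARMM output:

        # Generic normal mode analysis: eigenvectors of the mass-weighted
        # Hessian; the random symmetric matrix stands in for a real Hessian.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 30  # 3N coordinates for a 10-atom toy system
        h = rng.normal(size=(n, n))
        hessian = h + h.T + n * np.eye(n)  # symmetric, made positive definite
        masses = rng.uniform(1.0, 16.0, size=n)

        m_inv_sqrt = 1.0 / np.sqrt(masses)
        mw = hessian * np.outer(m_inv_sqrt, m_inv_sqrt)  # M^-1/2 H M^-1/2
        eigvals, modes = np.linalg.eigh(mw)              # ascending eigenvalues

        approx = modes[:, 0] + 0.05 * rng.normal(size=n)  # stand-in approximate mode
        approx /= np.linalg.norm(approx)
        print(f"overlap with exact lowest mode: {abs(approx @ modes[:, 0]):.3f}")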

  9. Genetic variation in jasmonic acid- and spider mite-induced plant volatile emission of cucumber accessions and attraction of the predator Phytoseiulus persimilis.

    PubMed

    Kappers, Iris F; Verstappen, Francel W A; Luckerhoff, Ludo L P; Bouwmeester, Harro J; Dicke, Marcel

    2010-05-01

    Cucumber plants (Cucumis sativus L.) respond to spider-mite (Tetranychus urticae) damage with the release of specific volatiles that are exploited by predatory mites, the natural enemies of the spider mites, to locate their prey. The production of volatiles also can be induced by exposing plants to the plant hormone jasmonic acid. We analyzed volatile emissions from 15 cucumber accessions upon herbivory by spider mites and upon exposure to jasmonic acid using gas chromatography-mass spectrometry. Upon induction, cucumber plants emitted over 24 different compounds, and the blend of induced volatiles consisted predominantly of terpenoids. The total amount of volatiles was higher in plants treated with jasmonic acid than in those infested with spider mites, with (E)-4,8-dimethyl-1,3,7-nonatriene, (E,E)-alpha-farnesene, and (E)-beta-ocimene as the most abundant compounds in all accessions in both treatments. Significant variation among the accessions was found for the 24 major volatile compounds. The accessions differed strongly in total amount of volatiles emitted, and displayed very different odor profiles. Principal component analysis performed on the relative quantities of particular compounds within the blend revealed clusters of highly correlated volatiles, which is suggestive of common metabolic pathways. A number of cucumber accessions also were tested for their attractiveness to Phytoseiulus persimilis, a specialist predator of spider mites. Differences in the attraction of predatory mites by the various accessions correlated to differences in the individual chemical profiles of these accessions. The presence of genetic variation in induced plant volatile emission in cucumber shows that it is possible to breed for cucumber varieties that are more attractive to predatory mites and other biological control agents.

  10. Genetic Variation in Jasmonic Acid- and Spider Mite-Induced Plant Volatile Emission of Cucumber Accessions and Attraction of the Predator Phytoseiulus persimilis

    PubMed Central

    Verstappen, Francel W. A.; Luckerhoff, Ludo L. P.; Bouwmeester, Harro J.; Dicke, Marcel

    2010-01-01

    Cucumber plants (Cucumis sativus L.) respond to spider-mite (Tetranychus urticae) damage with the release of specific volatiles that are exploited by predatory mites, the natural enemies of the spider mites, to locate their prey. The production of volatiles also can be induced by exposing plants to the plant hormone jasmonic acid. We analyzed volatile emissions from 15 cucumber accessions upon herbivory by spider mites and upon exposure to jasmonic acid using gas chromatography-mass spectrometry. Upon induction, cucumber plants emitted over 24 different compounds, and the blend of induced volatiles consisted predominantly of terpenoids. The total amount of volatiles was higher in plants treated with jasmonic acid than in those infested with spider mites, with (E)-4,8-dimethyl-1,3,7-nonatriene, (E,E)-α-farnesene, and (E)-β-ocimene as the most abundant compounds in all accessions in both treatments. Significant variation among the accessions was found for the 24 major volatile compounds. The accessions differed strongly in total amount of volatiles emitted, and displayed very different odor profiles. Principal component analysis performed on the relative quantities of particular compounds within the blend revealed clusters of highly correlated volatiles, which is suggestive of common metabolic pathways. A number of cucumber accessions also were tested for their attractiveness to Phytoseiulus persimilis, a specialist predator of spider mites. Differences in the attraction of predatory mites by the various accessions correlated to differences in the individual chemical profiles of these accessions. The presence of genetic variation in induced plant volatile emission in cucumber shows that it is possible to breed for cucumber varieties that are more attractive to predatory mites and other biological control agents. PMID:20383796

  11. An exploratory study into the effect of time-restricted internet access on face-validity, construct validity and reliability of postgraduate knowledge progress testing

    PubMed Central

    2013-01-01

    Background Yearly formative knowledge testing (also known as progress testing) was shown to have a limited construct-validity and reliability in postgraduate medical education. One way to improve construct-validity and reliability is to improve the authenticity of a test. As easily accessible internet has become inseparably linked to daily clinical practice, we hypothesized that allowing internet access for a limited amount of time during the progress test would improve the perception of authenticity (face-validity) of the test, which would in turn improve the construct-validity and reliability of postgraduate progress testing. Methods Postgraduate trainees taking the yearly knowledge progress test were asked to participate in a study where they could access the internet for 30 minutes at the end of a traditional pen and paper test. Before and after the test they were asked to complete a short questionnaire regarding the face-validity of the test. Results Mean test scores increased significantly for all training years. Trainees indicated that the face-validity of the test improved with internet access and that they would like to continue to have internet access during future testing. Internet access did not improve the construct-validity or reliability of the test. Conclusion Improving the face-validity of postgraduate progress testing, by adding the possibility to search the internet for a limited amount of time, positively influences test performance and face-validity. However, it did not change the reliability or the construct-validity of the test. PMID:24195696

  12. A Study of Four Public Libraries in Northeastern Ohio To Determine the Restriction of Access of Materials to Children.

    ERIC Educational Resources Information Center

    Wright, Gretchen McHenry

    This study of four public libraries in Northeastern Ohio was conducted to determine the amount, if any, of restriction of access of materials to minors. Four libraries were opportunistically selected, and the director and children's librarian in each were interviewed. The emerging hypothesis was that some restriction would occur. The written…

  13. An Investigation into Web Content Accessibility Guideline Conformance for an Aging Population

    ERIC Educational Resources Information Center

    Curran, Kevin; Robinson, David

    2007-01-01

    Poor web site design can cause difficulties for specific groups of users. By applying the Web Content Accessibility Guidelines to a web site, the number of users who can successfully view the content of that site will increase, especially among those in the disabled and older adult categories of online users. Older adults are coming…

  14. 41 CFR 301-70.804 - What amount must the Government be reimbursed for travel on a Government aircraft?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... no access to regularly scheduled commercial airline service. (c) For political travel on a Government... Government be reimbursed for travel on a Government aircraft? 301-70.804 Section 301-70.804 Public Contracts... Travel on Government Aircraft § 301-70.804 What amount must the Government be reimbursed for travel on a...

  15. 41 CFR 301-70.804 - What amount must the Government be reimbursed for travel on a Government aircraft?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... no access to regularly scheduled commercial airline service. (c) For political travel on a Government... Government be reimbursed for travel on a Government aircraft? 301-70.804 Section 301-70.804 Public Contracts... Travel on Government Aircraft § 301-70.804 What amount must the Government be reimbursed for travel on a...

  16. Community-aware charging station network design for electrified vehicles in urban areas : reducing congestion, emissions, improving accessibility, and promoting walking, bicycling, and use of public transportation.

    DOT National Transportation Integrated Search

    2016-08-31

    A major challenge for achieving large-scale adoption of EVs is an accessible infrastructure for the communities. The societal benefits of large-scale adoption of EVs cannot be realized without adequate deployment of publicly accessible charging stati...

  17. Use of standard hypodermic needles for accessing laparoscopic adjustable gastric band ports.

    PubMed

    Bewsher, Samuel Mark; Azzi, Anthony; Wright, Timothy

    2010-06-01

    Laparoscopic adjustable gastric banding is a common and successful method of surgically treating morbid obesity. A recipient will have to attend their surgeon's rooms a number of times to optimally adjust the amount of fluid in the band and hence the amount of restriction. Manufacturers suggest that the ports should be accessed with special non-coring needles, which may not always be available in regional or remote centers; this could create a safety risk in cases where urgent band deflation is required. Ports of two different brands were repeatedly accessed over 100 times in the same location while containing fluid under pressure, using a standard beveled 21 g hypodermic needle (SN) and a 20 g Huber-tipped non-coring needle (NCN). The path each needle type took through the port septum was also examined. There was no leakage of fluid from any of the ports tested. Neither the SN nor the NCN passed through the port septum along its axis, but rather in a direction closer to that of its beveled surface. There is no more risk of "coring" the septum with an SN than with an NCN. SNs can be used safely and routinely to access laparoscopic adjustable gastric band ports.

  18. University of TX Bureau of Economic Geology's Core Research Centers: The Time is Right for Registering Physical Samples and Assigning IGSN's - Workflows, Stumbling Blocks, and Successes.

    NASA Astrophysics Data System (ADS)

    Averett, A.; DeJarnett, B. B.

    2016-12-01

    The University of Texas Bureau of Economic Geology (BEG) serves as the geological survey for Texas and operates three geological sample repositories that house well over 2 million boxes of geological samples (cores and cuttings) and an abundance of geoscience data (geophysical logs, thin sections, geochemical analyses, etc.). Material is accessible and searchable online, and it is publicly available to the geological community for research and education. Patrons access information about the collection through an online core and log database (SQL format). BEG is currently undertaking a large project to: 1) improve the internal accuracy of metadata associated with the collection; 2) enhance the capabilities of the database for BEG curators and researchers as well as external patrons; and 3) ensure easy and efficient navigation for patrons through the online portal. As part of this project, BEG is in the early stages of planning to export the metadata for its collection into SESAR (System for Earth Sample Registration) and have IGSNs (International GeoSample Numbers) assigned to its samples. Education regarding the value of IGSNs and an external registry (SESAR) has been crucial to receiving management support for the project, because the concept and potential benefits of registering samples in a registry outside the institution were not well known prior to this project. Potential benefits such as increased discoverability, repository recognition in publications, and interoperability were presented. The project was well received by management, and BEG fully supports the effort to register its physical samples with SESAR. Since BEG is only in the initial phase of this project, stumbling blocks, workflow issues, and successes or failures can only be predicted at this point, but by mid-December BEG expects to have several concrete issues to present in the session. Currently, the most pressing issue involves establishing the most efficient workflow for exporting large amounts of metadata in a format that SESAR can easily ingest, and how this can best be accomplished with the very few BEG staff assigned to the project.

  19. 2014 Mount Ontake eruption: characteristics of the phreatic eruption as inferred from aerial observations

    NASA Astrophysics Data System (ADS)

    Kaneko, Takayuki; Maeno, Fukashi; Nakada, Setsuya

    2016-05-01

    The sudden eruption of Mount Ontake on September 27, 2014, led to a tragedy that caused more than 60 fatalities, including missing persons. In order to mitigate the potential risks posed by similar volcano-related disasters, it is vital to have a clear understanding of the activity status and progression of eruptions. Because the erupted material was largely disturbed after the eruption, while access was strictly prohibited for a month, we analyzed the aerial photographs taken on September 28. The results showed that there were three large vents in the bottom of the Jigokudani valley on September 28. The vent in the center was considered to have been the main vent involved in the eruption, and the vents on either side were considered to have been formed by non-explosive processes. The pyroclastic flows extended approximately 2.5 km along the valley at an average speed of 32 km/h. The absence of burned or fallen trees in this area indicated that the temperatures and destructive forces associated with the pyroclastic flow were both low. The distribution of ballistics was categorized into four zones based on the number of impact craters per unit area, and the furthest impact crater was located 950 m from the vents. Based on ballistic models, the maximum initial velocity of the ejecta was estimated to be 111 m/s. Just after the beginning of the eruption, very few ballistic ejecta had arrived at the summit, even though the eruption plume had risen above the summit, which suggests that a large amount of ballistic ejecta was expelled several tens of seconds after the beginning of the eruption. This initial period was characterized by the escape of a vapor phase from the vents, which then led to the explosive eruption phase that generated large amounts of ballistic ejecta via sudden decompression of a hydrothermal reservoir.
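
    The 111 m/s estimate can be checked against the 950 m furthest crater with the drag-free ballistic range formula R = v0^2 sin(2*theta) / g. Real ejecta experience drag, so the paper's model-based inversion is more involved, but the zeroth-order numbers are consistent:

        # Drag-free ballistic range for the estimated initial velocity;
        # a zeroth-order consistency check, not the paper's ballistic model.
        import math

        v0 = 111.0  # m/s, maximum initial velocity from the abstract
        g = 9.81
        for theta_deg in (30, 45):
            r = v0 ** 2 * math.sin(math.radians(2 * theta_deg)) / g
            print(f"launch angle {theta_deg} deg -> range {r:.0f} m")
        # 45 deg gives ~1256 m; drag pulls real ranges toward the observed 950 m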

  20. Programming Wireless Handheld Devices for Applications in Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Budiardja, R.; Saranathan, V.; Guidry, M.

    2002-12-01

    Wireless technology implemented with handheld devices has attractive features because of the potential to access large amounts of data and the prospect of on-the-fly computational analysis from a device that can be carried in a shirt pocket. We shall describe applications of such technology to the general paradigm of making digital wireless connections from the field to upload information and queries to network servers, executing (potentially complex) data analysis and/or database operations on fast network computers, and returning real-time information from this analysis to the handheld device in the field. As illustration, we shall describe several client/server programs that we have written for applications in teaching introductory astronomy. For example, one program allows static and dynamic properties of astronomical objects to be accessed in a remote observation laboratory setting using a digital cell phone or PDA. Another implements interactive quizzing over a cell phone or PDA using a 700-question introductory astronomy quiz database, thus permitting students to study for astronomy quizzes in any environment in which they have a few free minutes and a digital cell phone or wireless PDA. The presentation will include hands-on demonstrations with real devices.

  1. Sons as sole caregivers for their elderly parents. How do they cope?

    PubMed Central

    Thompson, B.; Tudiver, F.; Manson, J.

    2000-01-01

    OBJECTIVE: To examine the experiences of men who are sole caregivers for their elderly parents. DESIGN: Semistructured in-depth interviews. SETTING: Family practice clinic attached to a large tertiary care centre in north central Toronto. PARTICIPANTS: A convenience sample of 10 men who identified themselves as sole caregivers in that they had no particular women assisting them with caregiving. METHOD: Interviews were analyzed by standard qualitative methods. MAIN FINDINGS: Emerging themes were the spectrum of caregiving, the experience of caregiving, and the use of formal support systems. Scope of care varied from very little to total care, including personal care. Participants described positive and negative aspects of and the nature of their relationships with those for whom they cared. Avoiding institutionalization was seen as positive; effects on work and social life were negative. Use of more than homemaking services was associated with previous hospitalization; participants complained about difficulties accessing services. CONCLUSIONS: The nature of sons' relationships with their parents and the amount of time they have available can predict how much caregiving they can undertake. Information about community support services is not readily accessible to these men. PMID:10690492

  2. The relationship between Nairobi adolescents' media use and their sexual beliefs and attitudes.

    PubMed

    Miller, Ann Neville; Kinnally, William; Maleche, Hellen; Booker, Nancy Achieng'

    2017-07-01

    Adolescents in sub-Saharan Africa are at risk of contracting HIV. Although media campaigns have educated the population as a whole, few studies are available about the time sub-Saharan African youth spend listening to and viewing sexual messages via the entertainment and informational media. The goals of this project were: 1) to investigate what programming Nairobi adolescents access; and 2) to investigate the association of frequency of access and level of focus on physical relationships with adolescents' perceptions of descriptive norms of peer sexual behaviour, and with their attitudes regarding men as sex driven, women as sex objects, and dating as a sport. A total of 464 students from 6 Nairobi secondary schools were surveyed. When students' favourite musicians had a strong focus on physical relationships in their songs, those students gave higher estimates of the prevalence of risky sexual behaviours among their peers. These students also endorsed gender-stereotypical and casual attitudes about sex. Spending large amounts of time on the Internet was predictive of all sexual attitude variables. Students whose favourite TV programmes had a strong focus on physical relationships also estimated the prevalence of peer sexual behaviour as high.

  3. MAPI: a software framework for distributed biomedical applications

    PubMed Central

    2013-01-01

    Background The amount of web-based resources (databases, tools, etc.) in biomedicine has increased, but the integrated usage of those resources is complex due to differences in access protocols and data formats. However, distributed data processing is becoming inevitable in several domains, in particular in biomedicine, where researchers face rapidly increasing data sizes. This big data is difficult to process locally because of the large processing, memory and storage capacities required. Results This manuscript describes a framework, called MAPI, which provides a uniform representation of resources available over the Internet, in particular Web Services. The framework enhances their interoperability and collaborative use by enabling uniform and remote access. The framework's functionality is organized into modules that can be combined and configured in different ways to fulfil concrete development requirements. Conclusions The framework has been tested in the biomedical application domain, where it has served as a base for developing several clients that are able to integrate different web resources. The MAPI binaries and documentation are freely available at http://www.bitlab-es.com/mapi under the Creative Commons Attribution-No Derivative Works 2.5 Spain License. The MAPI source code is available by request (GPL v3 license). PMID:23311574

  4. Designing Citizen Science Projects in the Era of Mega-Information and Connected Activism

    NASA Astrophysics Data System (ADS)

    Pompea, S. M.

    2010-12-01

    The design of citizen science projects must take many factors into account in order to be successful. Currently, there are a wide variety of citizen science projects with different aims, audiences, reporting methods, and degrees of scientific rigor and usefulness. Projects function on local, national, and worldwide scales and range in time from limited campaigns to around the clock projects. For current and future projects, advanced cell phones and mobile computing allow an unprecedented degree of connectivity and data transfer. These advances will greatly influence the design of citizen science projects. An unprecedented amount of data is available for data mining by interested citizen scientists; how can projects take advantage of this? Finally, a variety of citizen scientist projects have social activism and change as part of their mission and goals. How can this be harnessed in a constructive and efficient way? The design of projects must also select the proper role for experts and novices, provide quality control, and must motivate users to encourage long-term involvement. Effective educational and instructional materials design can be used to design responsive and effective projects in a more highly connected age with access to very large amounts of information.

  5. Mineral resources: Reserves, peak production and the future

    USGS Publications Warehouse

    Meinert, Lawrence D.; Robinson, Gilpin; Nassar, Nedal

    2016-01-01

    The adequacy of mineral resources in light of population growth and rising standards of living has been a concern since the time of Malthus (1798), but many studies erroneously forecast impending peak production or exhaustion because they confuse reserves with "all there is". Reserves are formally defined as a subset of resources, and even current and potential resources are only a small subset of "all there is". Peak production or exhaustion cannot be modeled accurately from reserves. Using copper as an example, identified resources are twice as large as the amount projected to be needed through 2050. Estimates of yet-to-be-discovered copper resources are up to 40 times larger than currently identified resources, amounts that could last for many centuries. Thus, forecasts of imminent peak production due to resource exhaustion in the next 20-30 years are not valid. Short-term supply problems may arise, however, and supply-chain disruptions are possible at any time due to natural disasters (earthquakes, tsunamis, hurricanes) or political complications. Resolving these problems will require education and exploration technology development, access to prospective terrain, better recycling, and better accounting of the externalities associated with production (pollution, loss of ecosystem services, and water and energy use).

  6. Enhanced visual perception through tone mapping

    NASA Astrophysics Data System (ADS)

    Harrison, Andre; Mullins, Linda L.; Raglin, Adrienne; Etienne-Cummings, Ralph

    2016-05-01

    Tone mapping operators compress high dynamic range images to improve picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have largely been designed and evaluated based on the aesthetic quality of the resulting displayed image, or on how perceptually similar the compressed image appears relative to the original scene. They also often require per-image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image, and any parameter adjustment needs to be as automated as possible regardless of image content. We conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the expertise needed by the analyst(s) viewing the imagery.
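
    For context, a classic global operator of the kind being automated here is Reinhard's, which scales luminance by the log-average and then compresses with L/(1+L); the "key" parameter a is exactly the sort of per-image setting an automated method seeks to fix. This sketch is illustrative and is not the operator developed in the study.

        # Reinhard-style global tone mapping: scale by log-average luminance,
        # then compress with L/(1+L). Shown for context only.
        import numpy as np

        def reinhard_global(lum, a=0.18, eps=1e-6):
            log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average luminance
            scaled = (a / log_avg) * lum                  # "key" a sets brightness
            return scaled / (1.0 + scaled)                # compress into [0, 1)

        rng = np.random.default_rng(0)
        hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # synthetic HDR
        print(reinhard_global(hdr).round(3))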

  7. Event-Based User Classification in Weibo Media

    PubMed Central

    Guo, Liang; Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as a real-time microblogging service, has attracted massive attention and support from social network users. The Weibo platform offers an opportunity for people to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to events. Users who post different content and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under these circumstances, in order to effectively organize and manage the huge number of users, and thereby further manage their content, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate Weibo's properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately. PMID:25133235

  8. Event-based user classification in Weibo media.

    PubMed

    Guo, Liang; Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as a real-time microblogging service, has attracted massive attention and support from social network users. The Weibo platform offers an opportunity for people to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to events. Users who post different content and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under these circumstances, in order to effectively organize and manage the huge number of users, and thereby further manage their content, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate Weibo's properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately.
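    As a rough sketch of the pipeline both records describe (content plus network features feeding a four-way classifier), the following uses synthetic stand-in data; the feature columns and labels are hypothetical, not the paper's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Content features (activity around an event) plus network features.
X = np.column_stack([
    rng.poisson(5, n),        # event-related posts (hypothetical)
    rng.poisson(200, n),      # follower count
    rng.poisson(150, n),      # followee count
    rng.random(n),            # fraction of posts that get reposted
])
# 0=celebrity, 1=organization/media, 2=grassroots star, 3=ordinary user
y = rng.integers(0, 4, n)     # stand-in labels, for illustration only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```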

  9. International online support to process optimisation and operation decisions.

    PubMed

    Onnerth, T B; Eriksson, J

    2002-01-01

    The information level at technical facilities has developed from almost nothing 30-40 years ago to the advanced IT (Information Technology) systems of today, based on both chemical and mechanical on-line sensors for process and equipment. Still, the basic requirement is to get the right data at the right time for the decision to be made. Today a large amount of operational data is available at almost any European wastewater treatment plant, from the laboratory and from SCADA. The difficult part is to determine which data to keep, which to use in calculations, and how and where to make data available. With the STARcontrol system it is possible to separate out only the process-relevant data for use in on-line control and reporting at the engineering level, to optimise operation. Furthermore, the use of IT makes international communication possible, with full access to all of the data on a single plant. In this way, expert supervision can be both very local, in the local language (e.g. Polish), and at the same time very professional, with Danish experts advising on Danish processes in Poland or Sweden, where some of the 12 STARcontrol systems are running.

  10. Predator size and the suitability of a common prey.

    PubMed

    Erickson, Kristin S; Morse, D H

    1997-02-01

     Although a predator's mass should influence the suitability of its prey, this subject has received little direct attention. We studied the capture and processing of an abundant syrphid fly Toxomerus marginatus (c. 4 mg) by 0.6- to 40-mg juvenile crab spiders Misumena vatia (Thomisidae) to determine how profitability, relative profitability (profitability/predator mass), overall gain in mass, and relative gain in mass differed with predator mass, and whether foraging changed concurrently. In multi-prey experiments, the smallest successful spiders (0.6-3.0 mg) extracted less mass from flies, and did so more slowly, than large spiders. This gain was proportionately similar to that of 10- to 40-mg spiders with access to many Toxomerus. However, many small spiders failed to capture flies. When we gave spiders only a single Toxomerus, the smallest ones again extracted mass more slowly than the large ones and increased in mass less than the large ones, but increased in mass proportionately more than large ones. Relative gain in mass from a single prey decreased with increasing spider mass. Spiders larger than 10 mg all extracted similar amounts of mass from a single Toxomerus at similar rates, but varied in time spent between captures. Thus, Toxomerus changes with spider mass from a large, hard-to-capture bonanza to a small, easy-to-capture item of low per capita value. However, Toxomerus is common enough that large spiders can capture it en masse, thereby compensating for its decline in per capita value.

  11. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…
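    Querying a network dataset typically means locating a small pattern inside a large interaction graph; the toy query below illustrates the problem setting (networkx exact matching is used purely for illustration; the dissertation targets scalable methods for much larger graphs).

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Target network: genes and their interactions (tiny example).
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
# Query pattern: a triangle of mutually interacting genes.
Q = nx.cycle_graph(3)

# Enumerate occurrences of the query pattern inside the target network.
gm = isomorphism.GraphMatcher(G, Q)
matches = {frozenset(m) for m in gm.subgraph_isomorphisms_iter()}
print(matches)  # {frozenset({'A', 'B', 'C'})}
```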

  12. Toward Accessing Spatial Structure from Building Information Models

    NASA Astrophysics Data System (ADS)

    Schultz, C.; Bhatt, M.

    2011-08-01

    Data about building designs and layouts is becoming increasingly more readily available. In the near future, service personnel (such as maintenance staff or emergency rescue workers) arriving at a building site will have immediate real-time access to enormous amounts of data relating to structural properties, utilities, materials, temperature, and so on. The critical problem for users is the taxing and error prone task of interpreting such a large body of facts in order to extract salient information. This is necessary for comprehending a situation and deciding on a plan of action, and is a particularly serious issue in time-critical and safety-critical activities such as firefighting. Current unifying building models such as the Industry Foundation Classes (IFC), while being comprehensive, do not directly provide data structures that focus on spatial reasoning and spatial modalities that are required for high-level analytical tasks. The aim of the research presented in this paper is to provide computational tools for higher level querying and reasoning that shift the cognitive burden of dealing with enormous amounts of data away from the user. The user can then spend more energy and time in planning and decision making in order to accomplish the tasks at hand. We present an overview of our framework that provides users with an enhanced model of "built-up space". In order to test our approach using realistic design data (in terms of both scale and the nature of the building models) we describe how our system interfaces with IFC, and we conduct timing experiments to determine the practicality of our approach. We discuss general computational approaches for deriving higher-level spatial modalities by focusing on the example of route graphs. Finally, we present a firefighting scenario with alternative route graphs to motivate the application of our framework.
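    At its simplest, the route-graph modality reduces to a graph of rooms connected by doors, over which alternative routes can be computed and re-planned when conditions change; the sketch below uses invented rooms and weights, not the authors' framework or IFC data.

```python
import networkx as nx

# Rooms are nodes; doors are weighted edges (weight ~ traversal distance).
G = nx.Graph()
doors = [("lobby", "corridor", 5), ("corridor", "lab", 8),
         ("corridor", "stairwell", 4), ("lab", "stairwell", 12)]
G.add_weighted_edges_from(doors)

print(nx.shortest_path(G, "lab", "lobby", weight="weight"))
# Firefighting scenario: one door becomes impassable, so re-plan around it.
G.remove_edge("corridor", "lab")
print(nx.shortest_path(G, "lab", "lobby", weight="weight"))
```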

  13. The digital divide in public e-health: barriers to accessibility and privacy in state health department websites.

    PubMed

    West, Darrell M; Miller, Edward Alan

    2006-08-01

    State health departments have placed a tremendous amount of information, data, and services online in recent years. With the significant increase in online resources at official health sites, though, have come questions concerning equity of access and the confidentiality of electronic medical materials. This paper reports on an examination of public health department websites maintained by the 50 state governments. Using a content analysis of health department sites undertaken each year from 2000 to 2005, we investigate several dimensions of accessibility and privacy: readability levels, disability access, non-English accessibility, and the presence of privacy and security statements. We argue that although progress has been made at improving the accessibility and confidentiality of health department electronic resources, there remains much work to be done to ensure quality access for all Americans in the area of public e-health.

  14. Filtering Access to Internet Content at Higher Education Institutions: Stakeholder Perceptions and Their Impact on Research and Academic Freedom

    ERIC Educational Resources Information Center

    Orenstein, David I.

    2009-01-01

    Hardware and software filters, which sift through keywords placed in Internet search engines and online databases, work to limit the return of information from these sources. By their very purpose, filters exist to decrease the amount of information researchers can access. The purpose of this study is to gain insight into the perceptions key…

  15. Amounts of Down Woody Materials for Mixed-Oak Forests in Kentucky, Virginia, Tennessee, and North Carolina

    Treesearch

    David C. Chojnacky; Thomas M. Schuler

    2004-01-01

    Fallen or down dead wood is a key element in healthy forest ecosystems. Although the amount of down wood and shrubs can provide critical information to forest resource managers for assessing fire fuel build up, data on biomass of down woody materials (DWM) are not readily accessible using existing databases. We summarized data collected by the USDA Forest Service'...

  16. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

    Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work-areas. These test cells, about 9 ft. high, are constructed of high density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort and its results.

  17. Simple system--substantial share: the use of Dictyostelium in cell biology and molecular medicine.

    PubMed

    Müller-Taubenberger, Annette; Kortholt, Arjan; Eichinger, Ludwig

    2013-02-01

    Dictyostelium discoideum offers unique advantages for studying fundamental cellular processes, host-pathogen interactions as well as the molecular causes of human diseases. The organism can be easily grown in large amounts and is amenable to diverse biochemical, cell biological and genetic approaches. Throughout their life cycle Dictyostelium cells are motile, and thus are perfectly suited to study random and directed cell motility with the underlying changes in signal transduction and the actin cytoskeleton. Dictyostelium is also increasingly used for the investigation of human disease genes and the crosstalk between host and pathogen. As a professional phagocyte it can be infected with several human bacterial pathogens and used to study the infection process. The availability of a large number of knock-out mutants renders Dictyostelium particularly useful for the elucidation and investigation of host cell factors. A powerful armory of molecular genetic techniques that have been continuously expanded over the years and a well curated genome sequence, which is accessible via the online database dictyBase, considerably strengthened Dictyostelium's experimental attractiveness and its value as model organism. Copyright © 2012 Elsevier GmbH. All rights reserved.

  18. Non-symbolic halving in an Amazonian indigene group

    PubMed Central

    McCrink, Koleen; Spelke, Elizabeth S.; Dehaene, Stanislas; Pica, Pierre

    2014-01-01

    Much research supports the existence of an Approximate Number System (ANS) that is recruited by infants, children, adults, and non-human animals to generate coarse, non-symbolic representations of number. This system supports simple arithmetic operations such as addition, subtraction, and ordering of amounts. The current study tests whether an intuition of a more complex calculation, division, exists in an indigene group in the Amazon, the Mundurucu, whose language includes no words for large numbers. Mundurucu children were presented with a video event depicting a division transformation of halving, in which pairs of objects turned into single objects, reducing the array's numerical magnitude. Then they were tested on their ability to calculate the outcome of this division transformation with other large-number arrays. The Mundurucu children effected this transformation even when non-numerical variables were controlled, performed above chance levels on the very first set of test trials, and exhibited performance similar to urban children who had access to precise number words and a surrounding symbolic culture. We conclude that a halving calculation is part of the suite of intuitive operations supported by the ANS. PMID:23587042

  19. Complex Networks in Different Languages: A Study of an Emergent Multilingual Encyclopedia

    NASA Astrophysics Data System (ADS)

    Pembe, F. Canan; Bingol, Haluk

    There is increasing interest in the study of complex networks in an interdisciplinary way. Language, as a complex network, has been a part of this study due to its importance in human life. Moreover, the Internet has also been at the center of this study by making access to large amounts of information possible. With these ideas in mind, this work aims to evaluate conceptual networks in different languages with data from a large and open source of information on the Internet, namely Wikipedia. As an evolving multilingual encyclopedia that can be edited by any Internet user, Wikipedia is a good example of an emergent complex system. In this paper, different from previous work on conceptual networks which usually concentrated on single languages, we concentrate on possible ways to compare the usages of different languages and possibly the underlying cultures. This also involves the analysis of local network properties around certain concepts in different languages. For an initial evaluation, the concept "family" is used to compare the English and German Wikipedias. Although the work is currently at an early stage, the results are promising.

  20. SILVA tree viewer: interactive web browsing of the SILVA phylogenetic guide trees.

    PubMed

    Beccati, Alan; Gerken, Jan; Quast, Christian; Yilmaz, Pelin; Glöckner, Frank Oliver

    2017-09-30

    Phylogenetic trees are an important tool for studying the evolutionary relationships among organisms. The huge number of available taxa poses difficulties for their interactive visualization, which hampers interaction with users and the collection of feedback for further improvement of the taxonomic framework. The SILVA Tree Viewer is a web application designed for visualizing large phylogenetic trees without requiring the download of any software tool or data files. It is based on Web Geographic Information Systems (Web-GIS) technology with a PostgreSQL backend and enables zoom and pan functionalities similar to Google Maps. The SILVA Tree Viewer enables access to two phylogenetic (guide) trees provided by the SILVA database: the SSU Ref NR99, inferred from high-quality, full-length small subunit sequences clustered at 99% sequence identity, and the LSU Ref, inferred from high-quality, full-length large subunit sequences. The Tree Viewer provides tree navigation, search and browse tools, as well as an interactive feedback system to collect all kinds of requests, ranging from taxonomy to data curation and improving the tool itself.

  1. Videometric Applications in Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu

    1997-01-01

    Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.

  2. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; hide

    2013-01-01

    Exascale computing systems are soon to emerge, which will pose great challenges on the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through in-detail architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
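    The redesign described, letting every process write its own slice rather than funnelling data through a single writer, can be illustrated with collective MPI-IO; this is a generic mpi4py sketch, not GEOS-5's actual ADIOS-based framework.

```python
# Run under mpiexec, e.g.: mpiexec -n 4 python parallel_write.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1024, rank, dtype=np.float64)    # this rank's data slice
fh = MPI.File.Open(comm, "field.dat",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * local.nbytes                     # disjoint file regions
fh.Write_at_all(offset, local)                   # collective parallel write
fh.Close()
```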

  3. Big data analytics as a service infrastructure: challenges, desired properties and solutions

    NASA Astrophysics Data System (ADS)

    Martín-Márquez, Manuel

    2015-12-01

    CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more relevantly, the cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient daily accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs for CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative and educational); and potential solutions.
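    As a sketch of the batch side such an infrastructure would standardise, the following Spark job extracts monitoring records, aggregates them, and stores the result for analytics; the file paths, schema, and aggregation are hypothetical, not CERN's actual pipelines.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("accelerator-etl").getOrCreate()

# Extract: read a dump from an existing monitoring repository.
logs = spark.read.csv("monitoring_dump.csv", header=True, inferSchema=True)

# Transform: hourly averages per monitored device.
hourly = (logs
          .withColumn("hour", F.date_trunc("hour", F.to_timestamp("timestamp")))
          .groupBy("hour", "device")
          .agg(F.avg("value").alias("avg_value")))

# Load: store in a columnar format for downstream analytics.
hourly.write.mode("overwrite").parquet("analytics/hourly_device_stats")
```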

  4. Paradigms and Paradoxes: Dawn at Vesta

    NASA Technical Reports Server (NTRS)

    Raymond, C. A.; Russell, C. T.; Mittlefehldt, D. W.

    2014-01-01

    While confirming the popular paradigm of Vesta as the parent body of the HED meteorites, Dawn measurements have discovered many unexpected aspects of the vestan surface. First, an olivine layer was not found in the bottom of the large basin near the south pole of Vesta. In fact, while patches of olivine have been found in the north, it is rare on the surface. Secondly, while Vesta has little gravity and appears to have completely differentiated, it is not completely dry: evidence for transient flows and pits resulting from devolatilization has been found, implying a substantial amount of accessible water. Thirdly, material transported to the surface of Vesta from elsewhere in the asteroid belt appears as dark material buried near the top of the vestan crust. This may have arrived in a single large impact and been spread around the surface and buried, later to be re-excavated. However, it is not certain that this is the only possible scenario for the source of this material. In short, Dawn's observations of Vesta have been both reassuring and unsettling at the same time.

  5. Energy-balanced algorithm for RFID estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan

    2016-10-01

    RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most prior works use passive tags for estimation and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. But for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges, and these tags use their own batteries as the energy supply. Hence, conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, and how to keep the energy consumed by each tag approximately balanced. We design an energy-balanced estimation algorithm that achieves this goal.
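    A classic framed-ALOHA zero-slot estimator illustrates the estimation setting: each tag sends at most one short reply per frame, which keeps per-tag energy small and roughly uniform. This is the generic textbook idea, not the authors' energy-balanced algorithm.

```python
import math
import random

def run_frame(n_tags, n_slots, rng):
    """Each tag picks one slot uniformly; return the fraction of empty slots."""
    hits = [0] * n_slots
    for _ in range(n_tags):
        hits[rng.randrange(n_slots)] += 1
    return hits.count(0) / n_slots

def estimate_tags(empty_frac, n_slots):
    # P(slot empty) = (1 - 1/f)^n  =>  n = ln(z) / ln(1 - 1/f)
    return math.log(empty_frac) / math.log(1.0 - 1.0 / n_slots)

rng = random.Random(1)
z = run_frame(n_tags=5000, n_slots=4096, rng=rng)
print("estimated tags:", round(estimate_tags(z, 4096)))
```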

  6. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge for computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.

  7. Toppar: an interactive browser for viewing association study results.

    PubMed

    Juliusdottir, Thorhildur; Banasik, Karina; Robertson, Neil R; Mott, Richard; McCarthy, Mark I

    2018-06-01

    Data integration and visualization help geneticists make sense of large amounts of data. To help facilitate interpretation of genetic association data we developed Toppar, a customizable visualization tool that stores results from association studies and enables browsing over multiple results, by combining features from existing tools and linking to appropriate external databases. Detailed information on Toppar's features and functionality is available on our website http://mccarthy.well.ox.ac.uk/toppar/docs along with instructions on how to download, install and run Toppar. Our online version of Toppar is accessible from the website and can be test-driven using Firefox, Safari or Chrome on subsets of publicly available genome-wide association study anthropometric waist and body mass index data (Locke et al., 2015; Shungin et al., 2015) from the Genetic Investigation of ANthropometric Traits consortium. Contact: totajuliusd@gmail.com.

  8. Practical device-independent quantum cryptography via entropy accumulation.

    PubMed

    Arnon-Friedman, Rotem; Dupuis, Frédéric; Fawzi, Omar; Renner, Renato; Vidick, Thomas

    2018-01-31

    Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed "entropy accumulation", which asserts that the total amount of entropy of a large system is the sum of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.
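    Stated schematically, entropy accumulation says that the smooth min-entropy of the whole process grows linearly in the number of rounds, up to a sublinear correction; the form below is a generic paraphrase of such bounds, not the paper's exact theorem.

```latex
% For an n-round process with outputs A_1,...,A_n and adversary side
% information E, if each round is guaranteed a worst-case von Neumann
% entropy rate of at least t, the smooth min-entropy accumulates as
\[
  H_{\min}^{\varepsilon}\!\left(A_1^{\,n} \,\middle|\, E\right)
  \;\ge\; n\,t \;-\; c\sqrt{n},
\]
% with c independent of n (it depends on the smoothing parameter
% \varepsilon and on per-round quantities).
```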

  9. Adipose tissue derived mesenchymal stem cells for musculoskeletal repair in veterinary medicine

    PubMed Central

    Arnhold, Stefan; Wenisch, Sabine

    2015-01-01

    Adipose tissue derived stem cells (ASCs) are mesenchymal stem cells which can be obtained from different adipose tissue sources within the body. They form an abundant cell pool which is easily accessible, and the cells can be obtained in large numbers, cultivated and expanded in vitro, and prepared for tissue engineering approaches, especially for skeletal tissue repair. In recent years this cell population has attracted a great amount of attention among researchers in human as well as in veterinary medicine. In the meantime, ASCs have been well characterized and their use in regenerative medicine is very well established. This review focuses on the characterization of ASCs for their use in tissue engineering approaches, especially in veterinary medicine, and also highlights a selection of clinical trials on the basis of ASCs as the relevant cell source. PMID:25973326

  10. Adipose tissue derived mesenchymal stem cells for musculoskeletal repair in veterinary medicine.

    PubMed

    Arnhold, Stefan; Wenisch, Sabine

    2015-01-01

    Adipose tissue derived stem cells (ASCs) are mesenchymal stem cells which can be obtained from different adipose tissue sources within the body. They form an abundant cell pool which is easily accessible, and the cells can be obtained in large numbers, cultivated and expanded in vitro, and prepared for tissue engineering approaches, especially for skeletal tissue repair. In recent years this cell population has attracted a great amount of attention among researchers in human as well as in veterinary medicine. In the meantime, ASCs have been well characterized and their use in regenerative medicine is very well established. This review focuses on the characterization of ASCs for their use in tissue engineering approaches, especially in veterinary medicine, and also highlights a selection of clinical trials on the basis of ASCs as the relevant cell source.

  11. Quantum-locked key distribution at nearly the classical capacity rate.

    PubMed

    Lupo, Cosmo; Lloyd, Seth

    2014-10-17

    Quantum data locking is a protocol that allows for a small secret key to (un)lock an exponentially larger amount of information, hence yielding the strongest violation of the classical one-time pad encryption in the quantum setting. This violation mirrors a large gap existing between two security criteria for quantum cryptography quantified by two entropic quantities: the Holevo information and the accessible information. We show that the latter becomes a sensible security criterion if an upper bound on the coherence time of the eavesdropper's quantum memory is known. Under this condition, we introduce a protocol for secret key generation through a memoryless qudit channel. For channels with enough symmetry, such as the d-dimensional erasure and depolarizing channels, this protocol allows secret key generation at an asymptotic rate as high as the classical capacity minus one bit.
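    The headline rate claim can be written in one line; this is our paraphrase of the stated result, not a formula quoted from the paper.

```latex
\[
  R_{\mathrm{key}} \;\approx\; C - 1 \quad \text{(bits per channel use)},
\]
% i.e., for sufficiently symmetric memoryless channels (d-dimensional
% erasure, depolarizing), locked secret key is generated asymptotically
% at the classical capacity C minus a single bit.
```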

  12. AI in medicine on its way from knowledge-intensive to data-intensive systems.

    PubMed

    Horn, W

    2001-08-01

    The last 20 years of research and development in the field of artificial intelligence in medicine (AIM) show a path from knowledge-intensive systems, which try to capture the essential knowledge of experts in a knowledge-based system, to the data-intensive systems available today. Nowadays, enormous amounts of information are accessible electronically, and large datasets are collected by continuously monitoring the physiological parameters of patients. Knowledge-based systems are needed to make use of all these available data and to help us cope with the information explosion. In addition, temporal data analysis and intelligent information visualization can help us get a summarized view of the change in clinical parameters over time. Integrating AIM modules into the daily-routine software environment of our care providers gives us a great chance of maintaining and improving quality of care.

  13. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over its long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different professional power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a panoramic power grid data center, effectively meeting the storage needs of power dispatching big data, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can lay a solid foundation for follow-up in-depth analysis of power dispatching big data.
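    The routing idea (structured metadata to the relational store, bulky time series to the NoSQL side) can be sketched as follows; SQLite and a directory of JSON files stand in for the production SQL/NoSQL engines, and all names are invented for illustration.

```python
import json
import sqlite3
from pathlib import Path

# Relational side: structured device metadata with a fixed schema.
rel = sqlite3.connect("dispatch_meta.db")
rel.execute("CREATE TABLE IF NOT EXISTS device "
            "(id TEXT PRIMARY KEY, kind TEXT, station TEXT)")
rel.execute("INSERT OR REPLACE INTO device VALUES (?, ?, ?)",
            ("PMU-0042", "phasor", "north-substation"))
rel.commit()

# Key-value side: bulky real-time series, stored and fetched by key.
kv = Path("timeseries"); kv.mkdir(exist_ok=True)
samples = {"device": "PMU-0042", "values": [50.01, 49.98, 50.02]}
(kv / "PMU-0042_2017-10-01T00.json").write_text(json.dumps(samples))
```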

  14. RNAi downregulation of three key lignin genes in sugarcane improves glucose release without reduction in sugar production

    DOE PAGES

    Bewg, William P.; Poovaiah, Charleson; Lan, Wu; ...

    2016-12-20

    Sugarcane is a subtropical crop that produces large amounts of biomass annually. It is a key agricultural crop in many countries for the production of sugar and other products. Residual bagasse following sucrose extraction is currently underutilized and it has potential as a carbohydrate source for the production of biofuels. As with all lignocellulosic crops, lignin acts as a barrier to accessing the polysaccharides, and as such, is the focus of transgenic efforts. In this study, we used RNAi to individually reduce the expression of three key genes in the lignin biosynthetic pathway in sugarcane. These genes, caffeoyl-CoA O-methyltransferase (CCoAOMT), ferulate 5-hydroxylase (F5H) and caffeic acid O-methyltransferase (COMT), impact lignin content and/or composition.

  15. Framework for Deploying a Virtualized Computing Environment for Collaborative and Secure Data Analytics

    PubMed Central

    Meyer, Adrian; Green, Laura; Faulk, Ciearro; Galla, Stephen; Meyer, Anne-Marie

    2016-01-01

    Introduction: Large amounts of health data generated by a wide range of health care applications across a variety of systems have the potential to offer valuable insight into populations and health care systems, but robust and secure computing and analytic systems are required to leverage this information. Framework: We discuss our experiences deploying a Secure Data Analysis Platform (SeDAP), and provide a framework to plan, build and deploy a virtual desktop infrastructure (VDI) to enable innovation and collaboration and to operate within academic funding structures. It outlines 6 core components: Security, Ease of Access, Performance, Cost, Tools, and Training. Conclusion: A platform like SeDAP does not succeed simply through technical excellence and performance. Its adoption depends on a collaborative environment where researchers and users plan and evaluate the requirements of all aspects. PMID:27683665

  16. RNAi downregulation of three key lignin genes in sugarcane improves glucose release without reduction in sugar production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bewg, William P.; Poovaiah, Charleson; Lan, Wu

    Sugarcane is a subtropical crop that produces large amounts of biomass annually. It is a key agricultural crop in many countries for the production of sugar and other products. Residual bagasse following sucrose extraction is currently underutilized and it has potential as a carbohydrate source for the production of biofuels. As with all lignocellulosic crops, lignin acts as a barrier to accessing the polysaccharides, and as such, is the focus of transgenic efforts. In this study, we used RNAi to individually reduce the expression of three key genes in the lignin biosynthetic pathway in sugarcane. These genes, caffeoyl-CoA O-methyltransferase (CCoAOMT), ferulate 5-hydroxylase (F5H) and caffeic acid O-methyltransferase (COMT), impact lignin content and/or composition.

  17. Making GRADE accessible: a proposal for graphic display of evidence quality assessments.

    PubMed

    Khan, Khalid S; Borowiack, Ewa; Roos, Carolien; Kowalska, Monika; Zapalska, Anna; Mol, Ben W; Mignini, Luciano; Meads, Catherine; Walczak, Jacek

    2011-06-01

    When generating guidelines, quality of evidence is frequently reported in tabulated form capturing several domains, for example, study design, risk of bias and heterogeneity. Increasingly, this is done using the Grading of Recommendations Assessment, Development and Evaluation approach. As assimilating large amounts of tabulated data across several comparisons and outcomes spread over many pages (sometimes hundreds) is not easy, there is a need to present evidence summaries in a more effective way. A graphic display plotting the several domains used in evidence grading on equiangular spokes starting from the same point, with the length of each spoke proportional to the magnitude of the quality, succinctly captures the tabulated information. These plots allow easy identification of deficiencies, outliers and similarities in evidence quality for individual and multiple comparisons and outcomes, paving the way for their routine use alongside tabulated information.
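    The display described is essentially a radar (spider) plot over the grading domains; a minimal matplotlib sketch follows, with invented domain names and scores standing in for a real evidence profile.

```python
import numpy as np
import matplotlib.pyplot as plt

domains = ["Design", "Risk of bias", "Inconsistency",
           "Indirectness", "Imprecision"]
scores = [4, 3, 2, 4, 3]                 # e.g., 1 (very low) .. 4 (high)

# One equiangular spoke per domain; repeat the first point to close the shape.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)      # spoke length ~ quality magnitude
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains)
ax.set_ylim(0, 4)
plt.savefig("evidence_profile.png")
```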

  18. Conflicting Online Health Information and Rational Decision Making: Implication for Cancer Survivors.

    PubMed

    Yoon, Heesoo; Sohn, Minsung; Choi, Mankyu; Jung, Minsoo

    Although people in the social media age can access health information more easily, they have difficulty judging conflicting information rationally or summarizing the large amounts of health information available. Conflicting health information occurs when contrary assertions or information about a certain health issue comes from different information sources. This study examined the background and the current phenomenon of why conflicting health information occurs in real-world conditions. We also reviewed causes and solutions by reviewing the literature. In particular, we recommend a method that addresses the problems of patients, including cancer survivors, who cannot themselves be active in seeking health information. Thus, we categorized the specific types of conflicting health information and analyzed the sociodemographic factors and information-carrier factors that have an impact on the health information-seeking behavior of individuals.

  19. Production of plant-derived polyphenols in microorganisms: current state and perspectives.

    PubMed

    Milke, Lars; Aschenbrenner, Jennifer; Marienhagen, Jan; Kallscheuer, Nicolai

    2018-02-01

    Plants synthesize several thousand different polyphenols of which many have the potential to aid in preventing or treating cancer, cardiovascular, and neurodegenerative diseases. However, plants usually contain complex polyphenol mixtures impeding access to individual compounds in larger quantities. In contrast, functional integration of biosynthetic plant polyphenol pathways into microorganisms allows for the production of individual polyphenols as chemically distinct compounds, which can be synthesized in large amounts and can be more easily isolated. Over the last decade, microbial synthesis of many plant polyphenols could be achieved, and along the way, many decisive bottlenecks in the endogenous microbial host metabolism as well as in the heterologous plant pathways could be identified. In this review, we present recent advancements in metabolic engineering of microorganisms for the production of plant polyphenols and discuss how current challenges could be addressed in the future.

  20. Distinct modes of DNA accessibility in plant chromatin.

    PubMed

    Shu, Huan; Wildhaber, Thomas; Siretskiy, Alexey; Gruissem, Wilhelm; Hennig, Lars

    2012-01-01

    The accessibility of DNA to regulatory proteins is a major property of the chromatin environment that favours or hinders transcription. Recent studies in flies reported that H3K9me2-marked heterochromatin is accessible while H3K27me3-marked chromatin forms extensive domains of low accessibility. Here we show that plants regulate DNA accessibility differently. H3K9me2-marked heterochromatin is the least accessible in the Arabidopsis thaliana genome, and H3K27me3-marked chromatin also has low accessibility. We see that very long genes without H3K9me2 or H3K27me3 are often inaccessible and generate significantly lower amounts of antisense transcripts than other genes, suggesting that reduced accessibility is associated with reduced recognition of alternative promoters. The low accessibility of H3K9me2-marked heterochromatin and of long genes depends on cytosine methylation, explaining why chromatin accessibility differs between plants and flies. Together, we conclude that restriction of DNA accessibility is a local property of chromatin and not necessarily a consequence of microscopically visible compaction.

  1. 50 CFR 85.22 - Grant proposals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM CLEAN VESSEL ACT GRANT PROGRAM Application for..., education, sensitive waters, public access, and estimated costs; (e) Amount and source of matching funds...

  2. Less is more: prolonged intermittent access cocaine self-administration produces incentive-sensitization and addiction-like behavior.

    PubMed

    Kawa, Alex B; Bentzley, Brandon S; Robinson, Terry E

    2016-10-01

    Contemporary animal models of cocaine addiction focus on increasing the amount of drug consumption to produce addiction-like behavior. However, another critical factor is the temporal pattern of consumption, which in humans is characterized by intermittency, both within and between bouts of use. To model this, we combined prolonged access to cocaine (∼70 days in total) with an intermittent access (IntA) self-administration procedure and used behavioral economic indicators to quantify changes in motivation for cocaine. IntA produced escalation of intake, a progressive increase in cocaine demand (incentive-sensitization), and robust drug- and cue-induced reinstatement of drug-seeking behavior. We also asked whether rats that vary in their propensity to attribute incentive salience to reward cues (sign-trackers [STs] vs. goal-trackers [GTs]) vary in the development of addiction-like behavior. Although STs were more motivated to take cocaine after limited drug experience, after IntA, STs and GTs no longer differed on any measure of addiction-like behavior. Exposure to large quantities of cocaine is not necessary for escalation of intake, incentive-sensitization, or other addiction-like behaviors (IntA results in far less total cocaine consumption than 'long access' procedures). Also, the ST phenotype may increase susceptibility to addiction, not because STs are inherently susceptible to incentive-sensitization (perhaps all individuals are at risk), but because this phenotype promotes continued drug use, subjecting them to incentive-sensitization. Thus, the pharmacokinetics associated with the IntA procedure are especially effective in producing a number of addiction-like behaviors and may be valuable for studying associated neuroadaptations and for assessing individual variation in vulnerability.

  3. Astronomy: On the Bleeding Edge of Scholarly Infrastructure

    NASA Astrophysics Data System (ADS)

    Borgman, Christine; Sands, A.; Wynholds, L. A.

    2013-01-01

    The infrastructure for scholarship has moved online, making data, articles, papers, journals, catalogs, and other scholarly resources nodes in a deeply interconnected network. Astronomy has led the way on several fronts, developing tools such as ADS to provide unified access to astronomical publications and reaching agreement on common data file formats such as FITS. Astronomy also was among the first fields to establish open access to substantial amounts of observational data. We report on the first three years of a long-term research project to study knowledge infrastructures in astronomy, funded by the NSF and the Alfred P. Sloan Foundation. Early findings indicate that the availability and use of networked technologies for integrating scholarly resources vary widely within astronomy. Substantial differences arise in the management of data between ground-based and space-based missions and between subfields of astronomy, for example. While large databases such as SDSS and MAST are essential resources for many researchers, much pointed, ground-based observational data exist only on local servers, with minimal curation. Some astronomy data are easily discoverable and usable, but many are not. International coordination activities such as IVOA and distributed access to high-level data product servers such as SIMBAD and NED are enabling further integration of published data. Astronomers are tackling yet more challenges in new forms of publishing data, algorithms, and visualizations, and in assuring interoperability with parallel infrastructure efforts in related fields. New issues include data citation, attribution, and provenance. Substantial concerns remain for the long-term discoverability, accessibility, usability, and curation of astronomy data and other scholarly resources. The presentation will outline these challenges, how they are being addressed by astronomy and related fields, and identify concerns and accomplishments expressed by the astronomers we have interviewed and observed.

  4. Reduced representation approaches to interrogate genome diversity in large repetitive plant genomes.

    PubMed

    Hirsch, Cory D; Evans, Joseph; Buell, C Robin; Hirsch, Candice N

    2014-07-01

    Technology and software improvements in the last decade now provide methodologies to access the genome sequence of not only a single accession, but also multiple accessions of plant species. This provides a means to interrogate species diversity at the genome level. Ample diversity among accessions in a collection of species can be found, including single-nucleotide polymorphisms, insertions and deletions, copy number variation and presence/absence variation. For species with small, non-repetitive rich genomes, re-sequencing of query accessions is robust, highly informative, and economically feasible. However, for species with moderate to large sized repetitive-rich genomes, technical and economic barriers prevent en masse genome re-sequencing of accessions. Multiple approaches to access a focused subset of loci in species with larger genomes have been developed, including reduced representation sequencing, exome capture and transcriptome sequencing. Collectively, these approaches have enabled interrogation of diversity on a genome scale for large plant genomes, including crop species important to worldwide food security. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  5. SAVS: A Space and Atmospheric Visualization Science system

    NASA Technical Reports Server (NTRS)

    Szuszczewicz, E. P.; Mankofsky, A.; Blanchard, P.; Goodrich, C.; McNabb, D.; Kamins, D.

    1995-01-01

    The research environment faced by space and atmospheric scientists in the 1990s is characterized by unprecedented volumes of new data, by ever-increasing repositories of unexploited mission files, and by the widespread use of empirical and large-scale computational models needed for the synthesis of understanding across data sets and discipline boundaries. The effective analysis and interpretation of such massive amounts of information have become the subjects of legitimate concern. With SAVS (a Space and Atmospheric Visualization Science System), we address these issues by creating a 'push-button' software environment that mimics the logical scientific processes in data acquisition, reduction, and analysis without requiring a detailed understanding of the methods, networks, and modules that link the tools and effectively execute the functions. SAVS provides (1) a customizable framework for accessing a powerful set of visualization tools based on the popular AVS visualization software with hooks to PV-Wave and access to Khoros modules, (2) a set of mathematical and statistical tools, (3) an extensible library of discipline-specific functions and models (e.g., MSIS, IRI, Feldstein Oval, IGRF, satellite tracking with CADRE-3, etc.), and (4) capabilities for local and remote data base access. The system treats scalar, vector, and image data, and runs on most common Unix workstations. We present a description of SAVS and its components, followed by several applications based on generic research interests in interplanetary and magnetospheric physics (IMP/ISTP), active experiments in space (CRRES), and mission planning focused on the Earth's thermospheric, ionospheric, and mesospheric domains (TIMED).

  6. An Assessment of the Food and Nutrition Security Status of Weaned 7-12 Months Old Children in Rural and Peri-Urban Communities of Gauteng and Limpopo Provinces, South Africa.

    PubMed

    Ntila, Sithandiwe; Siwela, Muthulisi; Kolanisi, Unathi; Abdelgadir, Hafiz; Ndhlala, Ashwell

    2017-09-01

    This study assessed the food and nutrition security status of children receiving complementary food in rural and peri-urban communities. Groups of 106 mothers from Lebowakgomo village and Hammanskraal Township, respectively, participated in the survey. Additionally, six focus group discussions were conducted per study area to assess the mothers' perceptions of children's food access. The Children's Food Insecurity Access Scale (CFIAS) was used to assess the food security status (access) of the children. The Individual Dietary Diversity Score (IDDS), together with an unquantified food-consumption frequency survey, was used as a proxy measure of the nutritional quality of the children's diets. The age and weight of the children, obtained from the children's clinic health cards, were used to calculate Weight-for-Age Z-scores (WAZ) to determine the prevalence of underweight children. The findings showed that a large percentage of children were severely food-insecure: 87% and 78% in rural and peri-urban areas, respectively. Additionally, Lebowakgomo children (23.6%) and Hammanskraal children (17.9%) were severely underweight. Overall, the children's diets in both study areas were characterized by nutrient-deficient complementary foods. Cheaper foods with a longer stomach-filling effect, such as white maize meal and sugar, were the most commonly purchased and used. Hence, the children consumed very limited amounts of foods rich in proteins, minerals, and vitamins, which significantly increased the risk of their being malnourished.

  7. Internet survey of home storage of paracetamol by individuals in the UK.

    PubMed

    Shah, A D; Wood, D M; Dargan, P I

    2013-03-01

    Paracetamol (acetaminophen) is a common cause of liver failure due to overdose. Legislation introduced in the UK in 1998 to limit pack sizes of paracetamol has had limited impact on the overall number and severity of paracetamol overdoses. This may be because people have large amounts of paracetamol stored at home, but no previous studies have explored this question. Individuals who regularly take part in market research surveys were invited to take part in an Internet survey. They were asked to supply demographic details, the frequency with which they use paracetamol and ibuprofen, and details of the amount and location of these drugs that they possessed. The mean age of respondents was 43.3 years (standard deviation 14.5 years), and 49.9% were female. People with both ibuprofen and paracetamol tended to have more packs and tablets of paracetamol (P < 0.001) and over a third had 32 or more paracetamol tablets. The most common pack size was 16 tablet packs (44.8% of all packs), which accounted for 39.4% of tablets. The most common site of paracetamol storage in the home was the kitchen (63.8% of people, 95% confidence interval 60.7, 66.7). This study suggests that pack size legislation in the UK has had limited effect on the amount of paracetamol that individuals have access to in the home. This may explain, at least in part, the limited impact of the pack size legislation on paracetamol overdoses in the UK.

  8. A Cohort Analysis of Postbariatric Panniculectomy--Current Trends in Surgeon Reimbursement.

    PubMed

    Aherrera, Andrew S; Pandya, Sonal N

    2016-01-01

    The overall number of patients undergoing body contouring procedures after massive weight loss (MWL) has progressively increased over the past decade. The purpose of this study was to evaluate the charges and reimbursements for panniculectomy after MWL at a large academic institution in Massachusetts. A retrospective review was performed and included all identifiable panniculectomy procedures performed at our institution between January 2008 and January 2014. The annual number of patients undergoing panniculectomy, the type of insurance coverage and reimbursement method of each patient, and the amounts billed and reimbursed were evaluated. During our study period, 114 patients underwent a medically necessary panniculectomy as a result of MWL. The average surgeon fee billed was $3496 ± $704 and the average amount reimbursed was $1271 ± $589. Ten cases (8.8%) had no reimbursements, 31 cases (21.8%) reimbursed less than $1000, 66 cases (57.9%) reimbursed between $1000 and $2000, and no cases reimbursed the full amount billed. When evaluated by type of insurance coverage, collection ratios were 37.4% ± 17.4% overall, 41.7% ± 16.4% for private insurance, and 24.0% ± 13.0% for Medicare/Medicaid insurance (P < 0.001). Reimbursements for panniculectomy are remarkably low, and in many instances, absent, despite obtaining previous preauthorization of medical necessity. Although panniculectomy is associated with improvements in quality of life and high levels of patient satisfaction, poor physician reimbursement for this labor intensive procedure may preclude access to appropriate care required by the MWL patient population.

  9. High-Performance Data Analysis Tools for Sun-Earth Connection Missions

    NASA Technical Reports Server (NTRS)

    Messmer, Peter

    2011-01-01

    The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms require access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists now to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project have the potential to interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
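    TaskDL and mpiDL target IDL, so the Python sketch below only illustrates the task-farm pattern TaskDL manages: many independent work items with little communication, distributed over a pool of workers; the task function is an invented stand-in.

```python
from multiprocessing import Pool

def analyze(frame_id):
    """Stand-in for an independent analysis task (one image frame, say)."""
    return frame_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Tasks go to idle workers as they free up; no inter-task traffic.
        for frame, result in pool.imap_unordered(analyze, range(20)):
            print(f"frame {frame}: {result}")
```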

  10. Superficial Burn Wound Healing with Intermittent Negative Pressure Wound Therapy Under Limited Access and Conventional Dressings

    PubMed Central

    Honnegowda, Thittamaranahalli Muguregowda; Padmanabha Udupa, Echalasara Govindarama; Rao, Pragna; Kumar, Pramod; Singh, Rekha

    2016-01-01

    BACKGROUND: Thermal injury is associated with several biochemical and histopathological alterations in tissue. Analysis of these objective parameters is common in research and clinical practice to determine the healing rate of burn wounds. Negative pressure wound therapy has achieved wide success in treating chronic wounds. This study examines superficial burn wound healing with intermittent negative pressure wound therapy under limited access and conventional dressings. METHODS: A total of 50 patients were randomised into two equal groups: limited access and conventional dressing groups. Selected biochemical parameters such as hydroxyproline, hexosamine, total protein, antioxidants, malondialdehyde (MDA), wound surface pH, matrix metalloproteinase-2 (MMP-2), and nitric oxide (NO) were measured in the granulation tissue. Histopathologically, necrotic tissue, the amount of inflammatory infiltrate, angiogenesis and extracellular matrix (ECM) deposition were studied to determine wound healing under intermittent negative pressure. RESULTS: Patients treated with limited access showed a significant increase in mean hydroxyproline, hexosamine, total protein, reduced glutathione (GSH), and glutathione peroxidase (GPx), and a decrease in MDA, MMP-2, wound surface pH, and NO. Histopathologic study showed a significant difference after 10 days of treatment between the limited access and conventional dressing groups: median (Q1, Q3) = 3 (2, 4.25) vs 2 (1.75, 4). CONCLUSION: Limited access was shown to exert its beneficial effects on wound healing by increasing ground substance and antioxidants; reducing MMP-2 activity, MDA, and NO; providing an optimal pH; decreasing necrotic tissue and the amount of inflammatory infiltrate; and increasing ECM deposition and angiogenesis. PMID:27853690

  11. Non-equilibrium thermodynamics theory of econometric source discovery for large data analysis

    NASA Astrophysics Data System (ADS)

    van Bergem, Rutger; Jenkins, Jeffrey; Benachenhou, Dalila; Szu, Harold

    2014-05-01

    Almost all consumer and firm transactions are now conducted using computers, giving rise to increasingly large amounts of data available to analysts. The gold standard in Economic data manipulation techniques matured during a period of limited data access, and the new Large Data Analysis (LDA) paradigm we all face may quickly overwhelm most tools used by Economists. When coupled with an increased availability of numerous unstructured, multi-modal data sets, the impending 'data tsunami' could have serious detrimental effects for Economic forecasting, analysis, and research in general. Given this reality we propose a decision-aid framework for Augmented-LDA (A-LDA) - a synergistic approach to LDA which combines traditional supervised, rule-based Machine Learning (ML) strategies to iteratively uncover hidden sources in large data, artificial neural network (ANN) Unsupervised Learning (USL) strategies operating at the minimum Helmholtz free energy for isothermal dynamic equilibrium, and the Economic intuitions required to handle problems encountered when interpreting large amounts of Financial or Economic data. To make the ANN USL framework applicable to economics we define the temperature, entropy, and energy concepts in Economics from the Boltzmann viewpoint of non-equilibrium molecular thermodynamics, and define an information geometry on which the ANN can operate using USL to reduce information saturation. An exemplar of such a system representation is given for firm industry equilibrium. We demonstrate the traditional ML methodology in the economics context and leverage firm financial data to explore a frontier concept known as behavioral heterogeneity. Behavioral heterogeneity at the firm level can be imagined as a firm's interactions with different types of Economic entities over time. These interactions could impose varying degrees of institutional constraints on a firm's business behavior. We specifically look at behavioral heterogeneity for firms operating under the label of 'Going-Concern' and for firms labeled according to the institutional influence they may be experiencing, such as constraints on hiring and spending during a Bankruptcy or Merger procedure. Uncovering invariant features, or behavioral data metrics, from observable firm data in an economy can greatly benefit the FED, World Bank, and similar institutions. We find that the ML/LDA communities can benefit from Economic intuitions just as much as Economists can benefit from generic data exploration tools. The future of successful Economic data understanding, modeling, simulation, and visualization can be amplified by new A-LDA models and approaches for new and analogous models of Economic system dynamics. The potential benefits of improved economic data analysis and real-time decision-aid tools are numerous for researchers, analysts, and federal agencies who all deal with increasingly large amounts of complex data to support their decision making.
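
    The minimum-free-energy principle invoked above can be made concrete with a toy computation (this is an illustrative sketch, not the authors' implementation): at a fixed analogue temperature T, the distribution over states minimizing F = <E> - T*S is the Boltzmann distribution, and the minimum value equals -T log Z. All numbers below are arbitrary placeholders:

        # Toy illustration of the minimum Helmholtz free energy principle:
        # at temperature T, the distribution p_i minimizing F = <E> - T*S
        # is the Boltzmann distribution p_i ~ exp(-E_i / T).
        import numpy as np

        E = np.array([1.0, 2.0, 4.0])      # "energies" of three candidate states
        T = 1.5                            # analogue temperature

        p = np.exp(-E / T)
        p /= p.sum()                       # Boltzmann weights

        S = -np.sum(p * np.log(p))         # entropy of the distribution
        F = np.sum(p * E) - T * S          # Helmholtz free energy <E> - T*S
        # F coincides with -T*log(Z), the minimum over all distributions.
        print(p, F, -T * np.log(np.sum(np.exp(-E / T))))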

  12. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
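
    The report specifies its codes via parity-check matrices; the sketch below illustrates generic single-error correction from a parity-check matrix, using the small (7,4) Hamming code as a stand-in rather than one of the report's shortened BCH codes:

        # Single-error correction with a parity-check matrix H: the
        # syndrome H @ r identifies the column of H matching the error
        # position.  H here is the (7,4) Hamming code, not a BCH code
        # from the report.
        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        codeword = np.array([1, 0, 1, 1, 0, 1, 0])
        assert not (H @ codeword % 2).any()      # valid codeword: zero syndrome

        received = codeword.copy()
        received[4] ^= 1                         # flip one bit (memory error)

        syndrome = H @ received % 2
        if syndrome.any():
            # The column of H equal to the syndrome marks the error bit.
            err = next(i for i in range(H.shape[1])
                       if (H[:, i] == syndrome).all())
            received[err] ^= 1                   # correct it

        assert (received == codeword).all()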

  13. Segtor: Rapid Annotation of Genomic Coordinates and Single Nucleotide Variations Using Segment Trees

    PubMed Central

    Renaud, Gabriel; Neves, Pedro; Folador, Edson Luiz; Ferreira, Carlos Gil; Passetti, Fabio

    2011-01-01

    Various research projects often involve determining the relative position of genomic coordinates, intervals, single nucleotide variations (SNVs), insertions, deletions and translocations with respect to genes and their potential impact on protein translation. Due to the tremendous increase in throughput brought by the use of next-generation sequencing, investigators are routinely faced with the need to annotate very large datasets. We present Segtor, a tool to annotate large sets of genomic coordinates, intervals, SNVs, indels and translocations. Our tool uses segment trees built from the start and end coordinates of the genomic features the user wishes to use, instead of storing them in a database management system. The software also produces annotation statistics to allow users to visualize how many coordinates were found within various portions of genes. Our system can currently be made to work with any species available on the UCSC Genome Browser. Segtor is a suitable tool for groups, especially those with limited access to programmers or with an interest in analyzing large numbers of individual genomes, who wish to determine the relative position of very large sets of mapped reads and subsequently annotate observed mutations between the reads and the reference. Segtor (http://lbbc.inca.gov.br/segtor/) is an open-source tool that can be freely downloaded for non-profit use. We also provide a web interface for testing purposes. PMID:22069465
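
    The core operation Segtor performs is interval stabbing: finding all gene intervals containing a query coordinate. As a simplified stand-in (a linear scan over start-sorted intervals rather than a real segment tree, with hypothetical gene coordinates):

        # Simplified interval-stabbing sketch; a segment tree would make
        # the containment query logarithmic instead of linear.
        import bisect

        genes = [("geneA", 100, 500), ("geneB", 400, 900), ("geneC", 1500, 2000)]
        genes.sort(key=lambda g: g[1])       # sort intervals by start
        starts = [g[1] for g in genes]

        def annotate(pos):
            """Return the names of all genes whose interval contains pos."""
            hits = []
            # Only genes starting at or before pos can contain it.
            for name, start, end in genes[:bisect.bisect_right(starts, pos)]:
                if end >= pos:
                    hits.append(name)
            return hits

        print(annotate(450))   # ['geneA', 'geneB']
        print(annotate(1200))  # []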

  14. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It is advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored in disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during streamline construction only a very small amount of data is brought into main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
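
    The memory-management policy described above can be sketched as an LRU-bounded cache of octree blocks loaded from disk on demand (a minimal sketch; the per-block file naming and pickle format are hypothetical, not the paper's data layout):

        # On-demand, out-of-core block access: blocks are read from
        # per-block disk files only when a streamline enters them, and
        # a small LRU cache bounds resident memory.
        from collections import OrderedDict
        import pickle

        CACHE_SIZE = 8                    # keep at most 8 blocks in memory

        class BlockCache:
            def __init__(self):
                self.blocks = OrderedDict()

            def get(self, block_id):
                if block_id in self.blocks:
                    self.blocks.move_to_end(block_id)      # mark recently used
                    return self.blocks[block_id]
                with open(f"block_{block_id}.pkl", "rb") as f:
                    data = pickle.load(f)                  # fetch from disk
                self.blocks[block_id] = data
                if len(self.blocks) > CACHE_SIZE:
                    self.blocks.popitem(last=False)        # evict least recent
                return data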

  15. The Human Right to Water--Market Allocations and Subsistence in a World of Scarcity

    ERIC Educational Resources Information Center

    McAdam, Kevin C.

    2005-01-01

    More than one billion people do not have access to an adequate water supply. In Gambia and Haiti, people live on less than 4 liters of water per day. By contrast, most toilets in the West use several times that amount of water for a single flush. The global distribution of water is making it increasingly difficult for poor people to access it, and…

  16. Ejecta Experiments at the Pegasus Pulsed Power Facility

    DTIC Science & Technology

    1997-06-01

    Laboratory (LANL). The facility provides both radial and axial access for making measurements. There exist optical, laser, and X-Ray paths for performing measurements on the target assembly located near... surface variations, microjets can be formed, thus contributing to the amount of ejecta. In addition to material properties which contribute to ejecta

  17. High Performance Computing Assets for Ocean Acoustics Research

    DTIC Science & Technology

    2016-11-18

    independently on processing units with access to a typically available amount of memory, say 16 or 32 gigabytes. Our models require each processor to... allow results to be obtained with limited amounts of memory available to individual processing units (with no time frame for successful completion... put into use. One file server computer to store simulation output has also been purchased. The first workstation has 28 CPU cores, dual-thread (56

  18. 50 CFR Table 2c to Part 660... - 2010, and Beyond, Open Access and Limited Entry Allocations by Species or Species Group (weights...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... be taken as research catch and in non-groundfish fisheries is 3,000 mt. The commercial OY is 140,996... amount anticipated to be taken during research activity and 0.14 mt for the amount expected to be taken... abundance of an unexploited rockfish population in the California Current ecosystem, a non-quantitative...

  19. Macroscopic characterisations of Web accessibility

    NASA Astrophysics Data System (ADS)

    Lopes, Rui; Carriço, Luis

    2010-12-01

    The Web Science framework poses fundamental questions on the analysis of the Web, by focusing on how microscopic properties (e.g. at the level of a Web page or Web site) emerge into macroscopic properties and phenomena. One research topic in the analysis of the Web is Web accessibility evaluation, which centres on understanding how accessible a Web page is for people with disabilities. However, when framing Web accessibility evaluation within Web Science, we have found that existing research stays at the microscopic level. This article presents an experimental study framing Web accessibility evaluation within Web Science's goals. This study revealed novel accessibility properties of the Web not visible at the microscopic level, as well as properties of Web accessibility evaluation processes themselves. We observed at large scale some of the empirical knowledge on how accessibility is perceived by designers and developers, such as the disparity of interpretations of accessibility evaluation tools' warnings. We also found a direct relation between accessibility quality and Web page complexity. We provide a set of guidelines for designing Web pages, for education on Web accessibility, and on the computational limits of large-scale Web accessibility evaluations.

  20. The ecology of prescription opioid abuse in the USA: geographic variation in patients’ use of multiple prescribers (“doctor shopping”)

    PubMed Central

    McDonald, Douglas C.; Carlson, Kenneth E.

    2016-01-01

    Purpose This study estimates the prevalence in US counties of opioid patients who use large numbers of prescribers, the amounts of opioids they obtain, and the extent to which their prevalence is predicted by ecological attributes of counties, including general medical exposure to opioids. Methods Finite mixture models were used to estimate the size of an outlier subpopulation of patients with suspiciously large numbers of prescribers (probable doctor shoppers), using a sample of 146 million opioid prescriptions dispensed during 2008. Ordinary least squares regression models of county-level shopper rates included independent variables measuring ecological attributes of counties, including rates of patients prescribed opioids, socioeconomic characteristics of the resident population, supply of physicians, and measures of healthcare service utilization. Results The prevalence of shoppers varied widely by county, with rates ranging between 0.6 and 2.5 per 1000 residents. Shopper prevalence was strongly correlated with opioid prescribing for the general population, accounting for 30% of observed county variation in shopper prevalence, after adjusting for physician supply, emergency department visits, in-patient hospital days, poverty rates, percent of county residents living in urban areas, and racial/ethnic composition of resident populations. Approximately 30% of shoppers obtained prescriptions in multiple states. Conclusions The correlation between prevalence of doctor shoppers and opioid patients in a county could indicate either that easy access to legitimate medical treatment raises the risk of abuse or that drug abusers take advantage of greater opportunities in places where access is easy. Approaches to preventing excessive use of different prescribers are discussed. PMID:25111716
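
    As a rough, hypothetical sketch of the finite-mixture approach described above (not the authors' model or data), one can fit a two-component mixture to per-patient prescriber counts and read the high-mean component as the outlier "shopper" subpopulation:

        # Fit a two-component Gaussian mixture to (log) prescriber counts
        # and treat the high-mean component as the outlier subpopulation.
        # Counts are simulated; the paper's actual model and data differ.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        counts = np.concatenate([rng.poisson(2, 9800),    # typical patients
                                 rng.poisson(12, 200)])   # heavy multi-prescriber use
        X = np.log1p(counts).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(X)
        outlier = np.argmax(gm.means_.ravel())            # high-mean component
        share = gm.predict_proba(X)[:, outlier].mean()
        print(f"estimated outlier-subpopulation share: {share:.3f}")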

  1. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most essential systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact and low-cost solution based on open-source technologies in the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface that provides easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large: different hardware combinations allow scaling to different deployments, while paravirtualization allows increased security and easy migrations and upgrades.
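
    As a minimal illustration of talking to a DICOM node such as the DCM4CHEE server described above, the open-source pynetdicom library can issue a C-ECHO connectivity check; the host, port and AE titles below are hypothetical placeholders:

        # DICOM C-ECHO "ping" against a PACS node using pynetdicom.
        from pynetdicom import AE

        ae = AE(ae_title="TEST_SCU")
        ae.add_requested_context("1.2.840.10008.1.1")   # Verification SOP Class

        assoc = ae.associate("pacs.example.org", 11112, ae_title="DCM4CHEE")
        if assoc.is_established:
            status = assoc.send_c_echo()
            if status:
                print("C-ECHO status: 0x%04x" % status.Status)
            assoc.release()
        else:
            print("Association rejected or server unreachable")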

  2. Dexterity: A MATLAB-based analysis software suite for processing and visualizing data from tasks that measure arm or forelimb function.

    PubMed

    Butensky, Samuel D; Sloan, Andrew P; Meyers, Eric; Carmel, Jason B

    2017-07-15

    Hand function is critical for independence, and neurological injury often impairs dexterity. To measure hand function in people or forelimb function in animals, sensors are employed to quantify manipulation. These sensors make assessment easier and more quantitative and allow automation of these tasks. While automated tasks improve objectivity and throughput, they also produce large amounts of data that can be burdensome to analyze. We created software called Dexterity that simplifies data analysis of automated reaching tasks. Dexterity is MATLAB software that enables quick analysis of data from forelimb tasks. Through a graphical user interface, files are loaded and data are identified and analyzed. These data can be annotated or graphed directly. Analysis is saved, and the graph and corresponding data can be exported. For additional analysis, Dexterity provides access to custom scripts created by other users. To determine the utility of Dexterity, we performed a study to evaluate the effects of task difficulty on the degree of impairment after injury. Dexterity analyzed two months of data and allowed new users to annotate the experiment, visualize results, and save and export data easily. Previous analysis of tasks was performed with custom data analysis, requiring expertise with analysis software. Dexterity made the tools required to analyze, visualize and annotate data easy to use by investigators without data science experience. Dexterity increases accessibility to automated tasks that measure dexterity by making analysis of large data intuitive, robust, and efficient. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. FUn: a framework for interactive visualizations of large, high-dimensional datasets on the web.

    PubMed

    Probst, Daniel; Reymond, Jean-Louis

    2018-04-15

    During the past decade, big data have become a major tool in scientific endeavors. Although statistical methods and algorithms are well-suited for analyzing and summarizing enormous amounts of data, the results do not allow for a visual inspection of the entire data set. Current scientific software, including R packages and Python libraries such as ggplot2, matplotlib and plot.ly, does not support interactive visualizations of datasets exceeding 100 000 data points on the web. Other solutions enable the web-based visualization of big data only through data reduction or statistical representations. However, recent hardware developments, especially advancements in graphical processing units, allow for the rendering of millions of data points on a wide range of consumer hardware such as laptops, tablets and mobile phones. Similar to the challenges and opportunities brought to virtually every scientific field by big data, the visualization of and interaction with copious amounts of data are both demanding and hold great promise. Here we present FUn, a framework consisting of a client (Faerun) and server (Underdark) module, facilitating the creation of web-based, interactive 3D visualizations of large datasets, enabling record-level visual inspection. We also introduce a reference implementation providing access to SureChEMBL, a database containing patent information on more than 17 million chemical compounds. The source code and the most recent builds of Faerun and Underdark, Lore.js and the data preprocessing toolchain used in the reference implementation are available on the project website (http://doc.gdb.tools/fun/). daniel.probst@dcb.unibe.ch or jean-louis.reymond@dcb.unibe.ch.

  4. Addressing Open Water Data Challenges in the Bureau of Reclamation

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Danner, A.; Nagode, J.; Rocha, J.; Poulton, S.; Anderson, A.

    2017-12-01

    The Bureau of Reclamation is the largest wholesaler of water in the United States. Located in the 17 western states, Reclamation serves water to 31 million people, provides irrigation water to 20 percent of Western farmers, and is the second largest producer of hydroelectric power in the United States. Through these activities, Reclamation generates large amounts of water and water-related data describing reservoir and river system conditions, hydropower, environmental compliance activities, infrastructure assets, and other aspects of Reclamation's mission activities. Reclamation aims to make water and water-related data sets more easily found, accessed, and used in decision-making activities in order to benefit the public, private sector, and research communities. Historically, there has been no integrated, bureau-wide system to store data in machine-readable formats, nor a system to permit centralized browsing, open access, and web services. Reclamation began addressing these limitations by developing the Reclamation Water Information System (RWIS), released in Spring 2017 (https://water.usbr.gov/). A bureau-wide team contributed to RWIS development, including water data stewards, database administrators, and information technology (IT) specialists. The first RWIS release publishes reservoir time series data from Reclamation's five regions and includes a map interface for site identification, a query interface for data discovery and access, and web services for automated retrieval. As RWIS enhancement continues, the development team is building a companion system - the Reclamation Information Sharing Environment (RISE) - to provide access to other data subjects and types (geospatial, documents). While RWIS and RISE are promising starts, Reclamation continues to face challenges in meeting its open water data goals: making data consolidation and open publishing a value-added activity for programs that publish data locally, going beyond open access to also provide decision support, and scaling up IT solutions for future success as Reclamation programs increasingly elect to publish more and more data through RWIS/RISE, thereby creating a big data challenge. This presentation will highlight the status of these activities, lessons learned, and future directions.
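
    A hedged sketch of the kind of automated retrieval such web services enable; the endpoint path and parameters below are invented placeholders for illustration, not the documented RWIS API:

        # Hypothetical time-series fetch from a water-data web service.
        import requests

        BASE = "https://water.usbr.gov"          # RWIS home, per the abstract
        params = {"site": "EXAMPLE_RESERVOIR", "parameter": "storage",
                  "start": "2017-01-01", "end": "2017-06-30", "format": "json"}

        # "/api/timeseries" is a placeholder path, not a documented endpoint.
        resp = requests.get(f"{BASE}/api/timeseries", params=params, timeout=30)
        resp.raise_for_status()
        for row in resp.json().get("data", []):
            print(row)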

  5. The relation between community bans of self-service tobacco displays and store environment and between tobacco accessibility and merchant incentives.

    PubMed

    Lee, R E; Feighery, E C; Schleicher, N C; Halvorson, S

    2001-12-01

    These studies investigated (1) the effect of community bans of self-service tobacco displays on store environment and (2) the effect of consumer tobacco accessibility on merchants. We counted cigarette displays (self-service, clerk-assisted, clear acrylic case) in 586 California stores. Merchant interviews (N = 198) identified consumer tobacco accessibility, tobacco company incentives, and shoplifting. Stores in communities with self-service tobacco display bans had fewer self-service displays and more acrylic displays but an equal total number of displays. The merchants who limited consumer tobacco accessibility received fewer incentives and reported lower shoplifting losses. In contrast, consumer access to tobacco was unrelated to the amount of monetary incentives. Community bans decreased self-service tobacco displays; however, exposure to tobacco advertising in acrylic displays remained high. Reducing consumer tobacco accessibility may reduce shoplifting.

  6. Geospatial Data Management Platform for Urban Groundwater

    NASA Astrophysics Data System (ADS)

    Gaitanaru, D.; Priceputu, A.; Gogu, C. R.

    2012-04-01

    Due to the large number of civil works projects and research studies, large quantities of geo-data are produced for urban environments. These data are often redundant and are scattered across different institutions and private companies. Time-consuming operations such as data processing and information harmonisation are the main reason the re-use of data is systematically avoided. Urban groundwater data show the same complex situation. The underground structures (subway lines, deep foundations, underground parkings, and others), the urban facility networks (sewer systems, water supply networks, heating conduits, etc), the drainage systems, the surface water works and many others change continuously. As a consequence, their influence on groundwater changes systematically. However, these activities provide a large quantity of data: aquifer modelling and behaviour prediction can be carried out using monitored quantitative and qualitative parameters. Due to the rapid evolution of technology in the past few years, transferring large amounts of information through the internet has now become a feasible solution for sharing geoscience data. Furthermore, standard platform-independent means to do this have been developed (specific mark-up languages like GML, GeoSciML, WaterML, GWML, CityGML). They allow large geospatial databases to be easily updated and shared through the internet, even between different companies or between research centres that do not necessarily use the same database structures. For Bucharest City (Romania) an integrated platform for groundwater geospatial data management is being developed under the framework of a national research project - "Sedimentary media modeling platform for groundwater management in urban areas" (SIMPA) - financed by the National Authority for Scientific Research of Romania. The platform architecture is based on three components: a geospatial database, a desktop application (a complex set of hydrogeological and geological analysis tools) and a front-end geoportal service. The SIMPA platform makes use of mark-up transfer standards to provide a user-friendly application that can be accessed through the internet to query, analyse, and visualise geospatial data related to urban groundwater. The platform holds the information within the local groundwater geospatial databases, and the user is able to access these data through a geoportal service. The database architecture allows storing accurate and very detailed geological, hydrogeological, and infrastructure information that can be straightforwardly generalized and further upscaled. The geoportal service offers the possibility of querying a dataset from the spatial database. The query is coded in a standard mark-up language and sent to the server through the standard Hyper Text Transfer Protocol (HTTP) to be processed by the local application. After validation of the query, the results are sent back to the user to be displayed by the geoportal application. The main advantage of the SIMPA platform is that it offers the user the possibility of making a primary multi-criteria query, which results in a smaller set of data to be analysed afterwards. This improves both the transfer process parameters and the user's means of creating the desired query.

  7. Digital Scholarship and Open Access

    ERIC Educational Resources Information Center

    Losoff, Barbara; Pence, Harry E.

    2010-01-01

    Open access publications provide scholars with unrestricted access to the "conversation" that is the basis for the advancement of knowledge. The large number of open access journals, archives, and depositories already in existence demonstrates the technical and economic viability of providing unrestricted access to the literature that is the…

  8. Knowledge sharing and collaboration in translational research, and the DC-THERA Directory

    PubMed Central

    Gündel, Michaela; Austyn, Jonathan M.; Cavalieri, Duccio; Scognamiglio, Ciro; Brandizi, Marco

    2011-01-01

    Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of ‘big data’ research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and life-time. DC-THERA is a European ‘Network of Excellence’ (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, which is an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science. PMID:21969471

  9. Analyzing large-scale proteomics projects with latent semantic indexing.

    PubMed

    Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning

    2008-01-01

    Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on these data, leaving the ultimate value of these projects far below their potential. A prominent reason published proteomics data are seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as latent semantic analysis holds great promise and is currently underexploited.
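
    A small sketch of the latent semantic analysis technique applied above, using a truncated SVD on a synthetic dataset-by-identification count matrix (a random stand-in; the HUPO PPP data themselves are not reproduced here):

        # Latent semantic analysis: factor a (dataset x protein) count
        # matrix with a truncated SVD and compare datasets in the
        # reduced, noise-tolerant space.
        import numpy as np
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        rng = np.random.default_rng(1)
        counts = rng.integers(0, 5, size=(12, 300))   # 12 data sets, 300 proteins

        lsa = TruncatedSVD(n_components=4, random_state=1)
        embedded = lsa.fit_transform(counts)          # low-rank projection

        # Datasets from similar instruments/protocols should cluster together.
        print(cosine_similarity(embedded)[0].round(2))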

  10. Global health equity in United Kingdom university research: a landscape of current policies and practices.

    PubMed

    Gotham, Dzintars; Meldrum, Jonathan; Nageshwaran, Vaitehi; Counts, Christopher; Kumari, Nina; Martin, Manuel; Beattie, Ben; Post, Nathan

    2016-10-10

    Universities are significant contributors to research and technologies in health; however, the health needs of the world's poor are historically neglected in research. Medical discoveries are frequently licensed exclusively to one producer, allowing a monopoly and inequitable pricing. Similarly, research is often published in ways that make it inaccessible. Universities can adopt policies and practices to overcome neglect and ensure equitable access to research and its products. For 25 United Kingdom universities, data on health research funding were extracted from the top five United Kingdom funders' databases and coded as research on neglected diseases (NDs) and/or health in low- and lower-middle-income countries (hLLMIC). Data on intellectual property licensing policies and practices and open-access policies were obtained from publicly available sources and by direct contact with universities. Proportions of research articles published as open-access were extracted from PubMed and PubMed Central. Across United Kingdom universities, the median proportion of 2011-2014 health research funds attributable to ND research was 2.6% and for hLLMIC it was 1.7%. Overall, 79% of all ND funding and 74% of hLLMIC funding were granted to the top four institutions within each category. Seven institutions had policies to ensure that technologies developed from their research are affordable globally. Mostly, universities licensed their inventions to third parties in a way that confers monopoly rights. Fifteen institutions had an institutional open-access publishing policy; three had an institutional open-access publishing fund. The proportion of health-related articles with full-text versions freely available online ranged from 58% to 100% across universities (2012-2013); 23% of articles also had a creative commons CC-BY license. There is wide variation in the amount of global health research undertaken by United Kingdom universities, with a large proportion of total research funding awarded to a few institutions. To meet a level of research commitment in line with the global burden of disease, most universities should seek to expand their research activity. Most universities do not license their intellectual property in a way that is likely to encourage access in resource-poor settings, and lack policies to do so. The majority of recent research publications are published open-access, but not as gold standard (CC-BY) open-access.

  11. 12 CFR 625.29 - Payment of award.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FCA will pay the amount awarded to the applicant within 60 days of receipt of the applicant's... EXPENSES UNDER THE EQUAL ACCESS TO JUSTICE ACT Procedures for Considering Applications § 625.29 Payment of...

  12. 50 CFR Table 1c to Part 660... - 2009, Open Access and Limited Entry Allocations by Species or Species Group (weights in metric tons)

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... OY of 135,939 mt. The tribal set aside is 50,000 mt. The amount estimated to be taken as research... of 86.4 percent. The OY is reduced by 2.0 mt for the amount anticipated to be taken during research... population in the California Current ecosystem, a non-quantitative assessment was conducted in 2007. The...

  13. Large-area Overhead Manipulator for Access of Fields

    USDA-ARS?s Scientific Manuscript database

    Multi-axis, cable-driven manipulators have evolved over many years providing large area suspended platform access, programmability, relatively rigid and flexibly-positioned platform control and full six degree of freedom (DOF) manipulation of sensors and tools. We describe innovations for a new six...

  14. Method and apparatus for managing access to a memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  15. Gpu Implementation of a Viscous Flow Solver on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Xu, Tianhao; Chen, Long

    2016-06-01

    Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card, whose peak computing performance and memory bandwidth far exceed those of a contemporary high-end CPU, is preferable. We herein focus on the detailed implementation of our GPU-targeted Reynolds-averaged Navier-Stokes solver based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so that it is capable of handling complex topologies. Multiple optimizations are carried out to improve the memory-access performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The GPU-accelerated solver in this paper is demonstrated to have competitive advantages over its CPU-targeted counterpart.

  16. Face classification using electronic synapses

    NASA Astrophysics Data System (ADS)

    Yao, Peng; Wu, Huaqiang; Gao, Bin; Eryilmaz, Sukru Burc; Huang, Xueyao; Zhang, Wenqiang; Zhang, Qingtian; Deng, Ning; Shi, Luping; Wong, H.-S. Philip; Qian, He

    2017-05-01

    Conventional hardware platforms consume a huge amount of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry-friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of the analogue synaptic array and pave the way toward building an energy-efficient and large-scale neuromorphic system.

  17. Canonical Visual Size for Real-World Objects

    PubMed Central

    Konkle, Talia; Oliva, Aude

    2012-01-01

    Real-world objects can be viewed at a range of distances and thus can be experienced at a range of visual angles within the visual field. Given the large amount of visual size variation possible when observing objects, we examined how internal object representations represent visual size information. In a series of experiments which required observers to access existing object knowledge, we observed that real-world objects have a consistent visual size at which they are drawn, imagined, and preferentially viewed. Importantly, this visual size is proportional to the logarithm of the assumed size of the object in the world, and is best characterized not as a fixed visual angle, but by the ratio of the object and the frame of space around it. Akin to the previous literature on canonical perspective, we term this consistent visual size information the canonical visual size. PMID:20822298

  18. Nanostructured porous Si-based nanoparticles for targeted drug delivery

    PubMed Central

    Shahbazi, Mohammad-Ali; Herranz, Barbara; Santos, Hélder A.

    2012-01-01

    One of the backbones in nanomedicine is to deliver drugs specifically to unhealthy cells. Drug nanocarriers can cross physiological barriers and access different tissues, which after proper surface biofunctionalization can enhance cell specificity for cancer therapy. Recent developments have highlighted the potential of mesoporous silica (PSiO2) and silicon (PSi) nanoparticles for targeted drug delivery. In this review, we outline and discuss the most recent advances on the applications and developments of cancer therapies by means of PSiO2 and PSi nanomaterials. Bio-engineering and fine tuning of anti-cancer drug vehicles, high flexibility and potential for sophisticated release mechanisms make these nanostructures promising candidates for “smart” cancer therapies. As a result of their physicochemical properties they can be controllably loaded with large amounts of drugs and coupled to homing molecules to facilitate active targeting. The main emphasis of this review will be on the in vitro and in vivo studies. PMID:23507894

  19. Judgement heuristics and bias in evidence interpretation: The effects of computer generated exhibits.

    PubMed

    Norris, Gareth

    2015-01-01

    The increasing use of multi-media applications, trial presentation software and computer generated exhibits (CGE) has raised questions as to the potential impact of the use of presentation technology on juror decision making. A significant amount of the commentary on the manner in which CGE exerts legal influence is largely anecdotal; empirical examinations too are often devoid of established theoretical rationalisations. This paper will examine a range of established judgement heuristics (for example, the attribution error, representativeness, simulation), in order to establish their appropriate application for comprehending legal decisions. Analysis of both past cases and empirical studies will highlight the potential for heuristics and biases to be restricted or confounded by the use of CGE. The paper will conclude with some wider discussion on admissibility, access to justice, and emerging issues in the use of multi-media in court. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Improving collected rainwater quality in rural communities.

    PubMed

    Garrido, S; Aviles, M; Ramirez, A; Gonzalez, A; Montellano, L; Gonzalez, B; de la Paz, J; Ramirez, R M

    2011-01-01

    The country of Mexico is facing serious problems with water quality and supply for human use and consumption in rural communities, mainly due to topography and isolation. In Mexico the average annual precipitation is 1,500 cubic kilometers of water; if 3% of that amount were used, 13 million Mexicans who currently lack drinking water could be supplied. Considering the limited infrastructure and management in rural communities, which do not receive services from the centralized systems of large cities, a modified pilot multi-stage filtration (MMSF) system was designed, developed, and evaluated for treating collected rainwater in three rural communities: Ajuchitlan and Villa Nicolas Zapata (Morelos State) and Xacxamayo (Puebla State). The removal efficiencies obtained in the treatment system were >93% for colour and turbidity. It is worth mentioning that the water obtained for human use and consumption complies with the Mexican Standard NOM-127-SSA1-1994.

  1. HIPAA for physicians in the information age.

    PubMed

    Kavoussi, Shaheen C; Huang, John J; Tsai, James C; Kempton, James E

    2014-08-01

    The increased prominence of electronic health records, email, mobile devices, and social media has transformed the health care environment by providing both physicians and patients with opportunities for rapid communication and knowledge exchange. However, these technological advances require increased attention to patient privacy under the Health Insurance Portability and Accountability Act (HIPAA). Instant access to large amounts of electronic protected health information (PHI) merits the highest standard of network security and HIPAA training for all staff members. Physicians are responsible for protecting PHI stored on portable devices. Personal, residential, and public wireless connections are not certified with HIPAA-compliant Business Associate Agreements and are unsuitable for PHI. A professional and privacy-oriented approach to electronic communication, online activity, and social media is imperative to maintaining public trust in physician integrity. As new technologies are integrated into health care practice, the assurance of privacy will encourage patients to continue to seek medical care.

  2. Using immunoglobulin Y as an alternative antibody for the detection of hepatitis A virus in frozen liver sections.

    PubMed

    Bentes, Gentil Arthur; Lanzarini, Natália Maria; Lima, Lyana Rodrigues Pinto; Manso, Pedro Paulo de Abreu; da Silva, Alexandre Dos Santos; Mouta Junior, Sergio da Silva E; Guimarães, Juliana Rodrigues; de Moraes, Marcia Terezinha Baroni; Pelajo-Machado, Marcelo; Pinto, Marcelo Alves

    2015-06-01

    An increasing amount of research has been conducted on immunoglobulin Y (IgY) because IgY offers several advantages for diagnostic testing, including easy accessibility, low cost and translatability to large-scale production, in addition to the fact that it can be produced ethically. In a previous work, immunoglobulin reactive to hepatitis A virus (HAV) antigens was produced and purified from egg yolks (IgY). In the present work, this anti-HAV-specific IgY was used in an indirect immunofluorescence assay to detect viral antigens in liver biopsies that were obtained from experimentally infected cynomolgus monkeys. Fields that were positive for HAV antigen were detected in liver sections using confocal microscopy. In conclusion, egg yolks from immunised hens may be a reliable source for antibody production, which can be employed for immunological studies.

  3. Direct depth distribution measurement of deuterium in bulk tungsten exposed to high-flux plasma

    DOE PAGES

    Taylor, Chase N.; Shimada, M.

    2017-05-08

    Understanding tritium retention and permeation in plasma-facing components is critical for fusion safety and fuel cycle control. Glow discharge optical emission spectroscopy (GD-OES) is shown to be an effective tool to reveal the depth profile of deuterium in tungsten. Results confirm the detection of deuterium. Furthermore, a ~46 µm depth profile revealed that the deuterium content decreased precipitously in the first 7 µm, and detectable amounts were observed to depths in excess of 20 µm. The large probing depth of GD-OES (up to 100s of µm) enables studies not previously accessible to the more conventional techniques for investigating deuterium retention. Of particular applicability is the use of GD-OES to measure the depth profile for experiments where high diffusion is expected: deuterium retention in neutron irradiated materials, and ultra-high deuterium fluences in a burning plasma environment.

  4. Long-term slip rates and characteristic coseismic displacements: keys to the behaviour of active faults and to seismic hazard

    NASA Astrophysics Data System (ADS)

    Tapponnier, Paul; Ryerson, Frederick James; Van der Woerd, Jerome; Mériaux, Anne-Sophie; Lasserre, Cécile

    2001-11-01

    Over periods of thousands of years, active faults tend to slip at constant rates. Pioneering studies of large Asian faults show that cosmogenic radionuclides (10Be, 26Al) provide an unparalleled tool to date surface features whose offsets yield the longest records of recent cumulative movement. The technique is thus uniquely suited to determining long-term (10-100 ka) slip rates. Such rates, combined with coseismic slip amounts, can give access to the recurrence times of earthquakes of similar sizes. Landform dating - morphochronology - is therefore essential to understanding fault behaviour, evaluating seismic hazard, and building physical earthquake models. It is irreplaceable because long-term slip rates on interacting faults need not coincide with GPS-derived interseismic rates, and can be difficult to obtain from paleo-seismological trenching.

  5. Tips and tricks for using the internet for professional purposes.

    PubMed

    Ceylan, Hasan Huseyin; Güngören, Nurdan; Küçükdurmaz, Fatih

    2017-05-01

    Online resources provide access to large amounts of information which is expanding every day. Using search engines to reach the relevant, updated and complete literature indexed in various bibliographical databases has already become part of medical professionals' everyday life. However, most researchers often fail to conduct an efficient literature search on the internet. The right literature search techniques save time and improve the quality of the retrieved data. Efficient literature search is not a talent but a learnable skill, which should be a formal part of medical education. This review briefly outlines the commonly used bibliographic databases, namely Pubmed, Cochrane Library, Web of Science, Scopus, EMBASE, CINAHL and Google Scholar. The definition of grey literature and its features are also summarised. Cite this article: EFORT Open Rev 2017;2. DOI: 10.1302/2058-5241.2.160066. Originally published online at www.efortopenreviews.org.

  6. T.I.M.S: TaqMan Information Management System, tools to organize data flow in a genotyping laboratory

    PubMed Central

    Monnier, Stéphanie; Cox, David G; Albion, Tim; Canzian, Federico

    2005-01-01

    Background Single Nucleotide Polymorphism (SNP) genotyping is a major activity in biomedical research. The Taqman technology is one of the most commonly used approaches. It produces large amounts of data that are difficult to process by hand. Laboratories not equipped with a Laboratory Information Management System (LIMS) need tools to organize the data flow. Results We propose a package of Visual Basic programs focused on sample management and on the parsing of input and output TaqMan files. The code is written in Visual Basic, embedded in the Microsoft Office package, and it allows anyone to have access to those tools, without any programming skills and with basic computer requirements. Conclusion We have created useful tools focused on management of TaqMan genotyping data, a critical issue in genotyping laboratories whithout a more sophisticated and expensive system, such as a LIMS. PMID:16221298
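
    T.I.M.S itself is written in Visual Basic, but the parsing task it automates can be illustrated generically; a minimal Python sketch, assuming a hypothetical CSV export with Sample/Assay/Call columns (real TaqMan output files may be laid out differently):

        # Parse a TaqMan-style results export into per-sample genotype calls.
        import csv
        from collections import defaultdict

        def load_calls(path):
            calls = defaultdict(dict)
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # Hypothetical columns: Sample, Assay, Call
                    calls[row["Sample"]][row["Assay"]] = row["Call"]
            return calls

        # genotypes = load_calls("taqman_export.csv")
        # print(genotypes["SAMPLE_001"])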

  7. Thermal systems design and analysis for a 10 K Sorption Cryocooler flight experiment

    NASA Technical Reports Server (NTRS)

    Bhandari, Pradeep; Bard, Steven

    1993-01-01

    The design, analysis and predicted performance of the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE) is described from a thermal perspective. BETSCE is a shuttle side-wall mounted cryogenic technology demonstration experiment planned for launch in November 1994. BETSCE uses a significant amount of power (about 500 W peak) and the resultant heat must be rejected passively with radiators, as BETSCE has no access to the active cooling capability of the shuttle. It was a major challenge to design and configure the individual hardware assemblies, with their relatively large radiators, to enable them to reject their heat while satisfying numerous severe shuttle-imposed constraints. This paper is a useful case study of a small shuttle payload that needs to reject relatively high heat loads passively in a highly constrained thermal environment. The design approach described is consistent with today's era of 'faster, better, cheaper' small-scale space missions.

  8. Face classification using electronic synapses.

    PubMed

    Yao, Peng; Wu, Huaqiang; Gao, Bin; Eryilmaz, Sukru Burc; Huang, Xueyao; Zhang, Wenqiang; Zhang, Qingtian; Deng, Ning; Shi, Luping; Wong, H-S Philip; Qian, He

    2017-05-12

    Conventional hardware platforms consume a huge amount of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry-friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of the analogue synaptic array and pave the way toward building an energy-efficient and large-scale neuromorphic system.

  9. Direct depth distribution measurement of deuterium in bulk tungsten exposed to high-flux plasma

    NASA Astrophysics Data System (ADS)

    Taylor, C. N.; Shimada, M.

    2017-05-01

    Understanding tritium retention and permeation in plasma-facing components is critical for fusion safety and fuel cycle control. Glow discharge optical emission spectroscopy (GD-OES) is shown to be an effective tool to reveal the depth profile of deuterium in tungsten. Results confirm the detection of deuterium. A ˜46 μm depth profile revealed that the deuterium content decreased precipitously in the first 7 μm, and detectable amounts were observed to depths in excess of 20 μm. The large probing depth of GD-OES (up to 100s of μm) enables studies not previously accessible to the more conventional techniques for investigating deuterium retention. Of particular applicability is the use of GD-OES to measure the depth profile for experiments where high deuterium concentration in the bulk material is expected: deuterium retention in neutron irradiated materials, and ultra-high deuterium fluences in burning plasma environment.

  10. Animal models of binge drinking, current challenges to improve face validity.

    PubMed

    Jeanblanc, Jérôme; Rolland, Benjamin; Gierski, Fabien; Martinetti, Margaret P; Naassila, Mickael

    2018-05-05

    Binge drinking (BD), i.e., consuming a large amount of alcohol in a short period of time, is an increasing public health issue. Though no clear definition has been adopted worldwide, the speed of drinking seems to be a keystone of this behavior. Developing relevant animal models of BD is a priority for gaining a better characterization of the neurobiological and psychobiological mechanisms underlying this dangerous and harmful behavior. Until recently, preclinical research on BD was conducted mostly using forced administration of alcohol, but more recent studies have used scheduled access to alcohol to model more voluntary excessive intake and to achieve signs of intoxication that mimic the human behavior. The main challenges for future research are discussed with regard to the need for good face validity, construct validity and predictive validity in animal models of BD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Filamentary model in resistive switching materials

    NASA Astrophysics Data System (ADS)

    Jasmin, Alladin C.

    2017-12-01

    The need for next-generation computing devices is increasing along with the demand for efficient data processing. The amount of data generated every second also grows, requiring large data storage devices. Thanks to modern advances in nanofabrication, oxide-based memory devices are being studied to explore new research frontiers. Various oxide materials are studied as active layers for non-volatile memory. This technology has potential application in resistive random-access memory (ReRAM) and can be easily integrated into CMOS technologies. The long-term perspective of this research field is to develop devices which mimic how the brain processes information. To realize such applications, a thorough understanding of the charge transport and switching mechanism is important. A new perspective on multistate resistive switching, based on current-induced filament dynamics, will be discussed. A simple equivalent circuit of the device gives quantitative information about the nature of the conducting filament at different resistance states.

  12. Travelogue of Konrad Keilhack (1858-1944), Geologist from Berlin, attending the International Geological Congress 1897 in St. Petersburg (Russia)

    NASA Astrophysics Data System (ADS)

    Pfaffl, Fritz A.; Dullo, Wolf-Christian

    2015-09-01

    Keilhack reported his impressions from the International Geological Congress in Russia in 1897 in several consecutive articles. In the more than 100 years since then, a lot has changed. Apart from the totally different style of scientific presentations - almost no illustrations except maps were shown during a talk - the field trips were a very special event, involving huge amounts of logistics. More than 200 people were transported to very remote areas of the European part of Russia. As well as organizing transportation by coaches and horses, places to stay overnight had to be found in large numbers, and special regulations had to be issued by the government to allow access to various outcrops. Keilhack's visit to the oil-producing sites around Baku is of special interest, since they were obviously among the most productive in the world at that time.

  13. BOLDMirror: a global mirror system of DNA barcode data.

    PubMed

    Liu, D; Liu, L; Guo, G; Wang, W; Sun, Q; Parani, M; Ma, J

    2013-11-01

    DNA barcoding is a novel concept for taxonomic identification using short, specific genetic markers and has been applied to study a large number of eukaryotes. The huge amount of data generated by DNA barcoding requires well-organized information systems. Besides the Barcode of Life Data system (BOLD) established in Canada, mirror systems are also important for the international Barcode of Life project (iBOL). For this purpose, we developed BOLDMirror, a global mirror system of DNA barcode data. It is open source and can run in the LAMP (Linux + Apache + MySQL + PHP) environment. BOLDMirror has data synchronization, data representation and statistics modules, and also provides space to store user operation history. BOLDMirror can be accessed at http://www.boldmirror.net and several countries have used it to set up their own DNA barcoding sites. © 2012 John Wiley & Sons Ltd.

  14. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information about a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e., LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm on an FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
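
    As a reference for what such a core computes, here is a minimal pure-Python sketch of sum-of-absolute-differences (SAD) block matching; the window size and disparity range are illustrative, and a real FPGA implementation streams pixels through line buffers rather than buffering whole frames as done here:

    ```python
    import numpy as np

    def disparity_sad(left: np.ndarray, right: np.ndarray,
                      max_disp: int = 64, block: int = 9) -> np.ndarray:
        """Naive SAD block matching: for each left-image window, find the
        horizontal shift into the right image with the lowest absolute cost."""
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.uint8)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                best_cost, best_d = None, 0
                for d in range(min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1].astype(np.int32)
                    cost = np.abs(patch - cand).sum()
                    if best_cost is None or cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # Sanity check: identical views must yield zero disparity everywhere
    img = np.random.default_rng(0).integers(0, 255, size=(48, 64), dtype=np.uint8)
    print(disparity_sad(img, img, max_disp=16).max())  # -> 0
    ```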

  15. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
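
    The online-updating idea for linear models can be illustrated by accumulating X'X and X'y per arriving batch, so the fit is refreshed without storing historical data. This is a generic recursive-least-squares-style sketch, not the paper's full estimator (which also covers estimating equations and predictive residual tests):

    ```python
    import numpy as np

    class OnlineOLS:
        """Accumulate sufficient statistics so the OLS fit updates per batch."""
        def __init__(self, p: int):
            self.xtx = np.zeros((p, p))
            self.xty = np.zeros(p)

        def update(self, X: np.ndarray, y: np.ndarray) -> None:
            self.xtx += X.T @ X   # only p*p numbers kept, not the raw stream
            self.xty += X.T @ y

        def coef(self) -> np.ndarray:
            # pinv tolerates the rank deficiencies mentioned in the abstract
            return np.linalg.pinv(self.xtx) @ self.xty

    rng = np.random.default_rng(0)
    beta = np.array([1.0, -2.0, 0.5])
    model = OnlineOLS(3)
    for _ in range(100):                       # 100 incoming batches of 50 rows
        X = rng.normal(size=(50, 3))
        model.update(X, X @ beta + rng.normal(scale=0.1, size=50))
    print(model.coef())                        # approximately [1, -2, 0.5]
    ```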

  16. RE-PLAN: An Extensible Software Architecture to Facilitate Disaster Response Planning

    PubMed Central

    O’Neill, Martin; Mikler, Armin R.; Indrakanti, Saratchandra; Tiwari, Chetan; Jimenez, Tamara

    2014-01-01

    Computational tools are needed to make data-driven disaster mitigation planning accessible to planners and policymakers without the need for programming or GIS expertise. To address this problem, we have created modules to facilitate quantitative analyses pertinent to a variety of different disaster scenarios. These modules, which comprise the REsponse PLan ANalyzer (RE-PLAN) framework, may be used to create tools for specific disaster scenarios that allow planners to harness large amounts of disparate data and execute computational models through a point-and-click interface. Bio-E, a user-friendly tool built using this framework, was designed to develop and analyze the feasibility of ad hoc clinics for treating populations following a biological emergency event. In this article, the design and implementation of the RE-PLAN framework are described, and the functionality of the modules used in the Bio-E biological emergency mitigation tool are demonstrated. PMID:25419503

  17. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
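
    A small illustration of SQL's set-at-a-time style, using Python's built-in sqlite3 in place of an Oracle server (the table and its rows are invented for the example; core SQL syntax is shared across vendors):

    ```python
    import sqlite3

    # Hypothetical employees table; one declarative statement then operates on
    # the whole set of rows at once -- no record-at-a-time loop in client code.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
    con.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                    [("Ada", "ENG", 95000), ("Ben", "ENG", 82000), ("Cy", "HR", 60000)])

    for dept, avg_salary in con.execute(
            "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"):
        print(dept, avg_salary)
    ```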

  18. Tips and tricks for using the internet for professional purposes

    PubMed Central

    Ceylan, Hasan Huseyin; Güngören, Nurdan; Küçükdurmaz, Fatih

    2017-01-01

    Online resources provide access to large amounts of information, which is expanding every day. Using search engines to reach the relevant, up-to-date and complete literature indexed in various bibliographical databases has already become part of medical professionals' everyday life. However, many researchers fail to conduct an efficient literature search on the internet. The right search techniques save time and improve the quality of the retrieved data. Efficient literature searching is not a talent but a learnable skill, which should be a formal part of medical education. This review briefly outlines the commonly used bibliographic databases, namely PubMed, the Cochrane Library, Web of Science, Scopus, EMBASE, CINAHL and Google Scholar. The definition and features of grey literature are also summarised. Cite this article: EFORT Open Rev 2017;2. DOI: 10.1302/2058-5241.2.160066. Originally published online at www.efortopenreviews.org PMID:28630750

  19. Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline.

    PubMed

    Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric

    2014-01-29

    Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.

  20. Solving problems of disclosure risk in an academic setting: using a combination of restricted data and restricted access methods.

    PubMed

    Rodgers, Willard; Nolte, Michael

    2006-09-01

    THE HEALTH AND RETIREMENT STUDY collects a vast amount of information about a sample of the U.S. population over age 50 from biennial interviews, supplemental questionnaires, and through linkages with administrative data, including Social Security earnings and benefits records and Medicare claims records. To honor its pledge to the respondents that their data will be kept confidential, while at the same time meeting its objective of providing useful data to researchers, it has developed procedures for stripping sensitive information (i.e., information that could facilitate re-identification of sample members) from data sets that are publicly released, and also for providing mechanisms by which qualified researchers can gain access to a variety of restricted-access data files. These mechanisms include a procedure whereby highly qualified researchers (in particular, only those who have a current grant from a federal agency) can apply to obtain restricted-access data sets for a limited amount of time, with the understanding that they will make no attempt to re-identify sample members and that they will be audited to ensure that they have adhered to the agreed-upon safeguards. For those who meet some but not all of the requirements for receiving these data, the files can be analyzed in a data enclave (a controlled, secure environment in which eligible researchers can perform analyses). This paper focuses on approaches to restricting data access that may need to be considered by investigators who plan to share their data, and by their institutional officials who will need to support that effort with appropriate infrastructure and policies. It also provides guidance to investigators and institutional review boards (IRBs) who seek access to restricted data generated and archived elsewhere.

  1. Temporal Evolution of Ion Spectral Structures During a Geomagnetic Storm: Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Ferradas, C. P.; Zhang, J.-C.; Spence, H. E.; Kistler, L. M.; Larsen, B. A.; Reeves, G. D.; Skoug, R. M.; Funsten, H. O.

    2018-01-01

    Using the Van Allen Probes/Helium, Oxygen, Proton, and Electron mass spectrometer, we perform a case study of the temporal evolution of ion spectral structures observed in the energy range of 1 to 50 keV throughout the geomagnetic storm of 2 October 2013. The ion spectral features are observed near the inner edge of the plasma sheet and are signatures of fresh transport from the plasma sheet into the inner magnetosphere. We find that the characteristics of the ion structures are determined by the intensity of the convection electric field. Prior to the beginning of the storm, the plasma sheet inner edge exhibits narrow nose spectral structures that vary little in energy across L values. Ion access to the inner magnetosphere during these times is limited to the nose energy bands. As convection is enhanced and large amounts of plasma are injected from the plasma sheet during the main phase of the storm, ion access occurs at a wide energy range, as no nose structures are observed. As the magnetosphere recovers from the storm, single noses and then multiple noses are observed once again. We use a model of ion drift and losses due to charge exchange to simulate the ion spectra and gain insight into the main observed features.

  2. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets

    PubMed Central

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-01-01

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. PMID:27084938
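
    Since the API is exposed over XML-RPC, it can be reached from Python's standard library alone. The endpoint path, command name and anonymous key below follow the DeepBlue documentation as best recalled and should be treated as assumptions to verify against the current API docs:

    ```python
    import xmlrpc.client

    URL = "http://deepblue.mpi-inf.mpg.de/xmlrpc"   # assumed XML-RPC endpoint
    USER_KEY = "anonymous_key"                      # documented anonymous-access key

    server = xmlrpc.client.ServerProxy(URL, allow_none=True)
    # DeepBlue commands return a (status, payload) pair
    status, genomes = server.list_genomes(USER_KEY)
    if status == "okay":
        for genome_id, name in genomes:
            print(genome_id, name)
    ```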

  3. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field, with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data are analyzed using more than one algorithm for the same task. Although it has been shown that combining multiple peptide identification algorithms yields more robust results, unified approaches have emerged only recently; workflows that, for example, aim to optimize search parameters or employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
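
    A minimal sketch of the scripted workflow the abstract describes, loosely following the Ursgal documentation; the engine identifiers, profile string and file paths are version-dependent placeholders, not guaranteed names:

    ```python
    import ursgal

    # Profile and database are illustrative; adjust to your instrument and FASTA.
    uc = ursgal.UController(
        profile="LTQ XL low res",
        params={"database": "my_proteome.fasta"},  # hypothetical path
    )

    # Run one spectrum file through two search engines, then validate each result
    for engine in ("xtandem_vengeance", "omssa_2_1_9"):      # assumed engine names
        search_result = uc.search(input_file="sample.mzML", engine=engine)
        validated = uc.validate(input_file=search_result, engine="percolator_2_08")
        print(engine, "->", validated)
    ```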

  4. Cloud-Enabled Climate Analytics-as-a-Service using Reanalysis data: A case study.

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Duffy, D.; Schnase, J. L.; McInerney, M.; Tamkin, G.; Potter, G. L.; Thompson, J. H.

    2014-12-01

    The NASA Center for Climate Simulation (NCCS) maintains advanced data capabilities and facilities that allow researchers to access the enormous volume of data generated by weather and climate models. The NASA Climate Model Data Service (CDS) and the NCCS are merging their efforts to provide Climate Analytics-as-a-Service for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA25, and JRA55. These reanalyses were repackaged into the netCDF4 file format following the CMIP5 Climate and Forecast (CF) metadata convention before being sequenced into the Hadoop Distributed File System (HDFS). A small set of operations that represent a common starting point in many analysis workflows was then created: min, max, sum, count, variance and average. Reanalysis data exploration was performed using Hadoop MapReduce, and accessibility was achieved through the Climate Data Service (CDS) application programming interface (API) created at NCCS, which provides uniform treatment of large amounts of data. In this case study, we limited our exploration to two variables, temperature and precipitation, using three operations (min, max and avg) over 30 years of reanalysis data for three regions of the world: global, polar, and subtropical.
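
    The MapReduce-style operations can be sketched in a few lines: map each record to a (min, max, sum, count) tuple, then reduce per region key. The records and field layout here are invented for illustration; the real system runs these reductions over netCDF4 data in HDFS:

    ```python
    # Toy (region, temperature) records standing in for reanalysis grid values
    records = [("polar", -31.2), ("global", 14.8), ("polar", -28.9), ("global", 15.1)]

    # map: each record becomes a combinable (min, max, sum, count) tuple
    mapped = [(region, (t, t, t, 1)) for region, t in records]

    # reduce: merge tuples that share a region key
    def combine(a, b):
        return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2], a[3] + b[3])

    stats = {}
    for region, value in mapped:
        stats[region] = combine(stats[region], value) if region in stats else value

    for region, (lo, hi, total, n) in sorted(stats.items()):
        print(f"{region}: min={lo} max={hi} avg={total / n:.2f}")
    ```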

  5. User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org

    PubMed Central

    Eijssen, Lars M. T.; Jaillard, Magali; Adriaens, Michiel E.; Gaj, Stan; de Groot, Philip J.; Müller, Michael; Evelo, Chris T.

    2013-01-01

    Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing including normalization have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, application is hampered by lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating and harmonizing functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will allow improving data quality and reuse and adoption of standards. PMID:23620278

  6. Temporal evolution of ion spectral structures during a geomagnetic storm: Observations and modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferradas Alva, Cristian Pablo; Zhang, J.-C.; Spence, H. E.

    Using the Van Allen Probes/Helium, Oxygen, Proton, and Electron (HOPE) mass spectrometer, we perform a case study of the temporal evolution of ion spectral structures observed in the energy range of 1 to ~50 keV throughout the geomagnetic storm of 2 October 2013. The ion spectral features are observed near the inner edge of the plasma sheet and are signatures of fresh transport from the plasma sheet into the inner magnetosphere. We find that the characteristics of the ion structures are determined by the intensity of the convection electric field. Prior to the beginning of the storm, the plasma sheet inner edge exhibits narrow nose spectral structures that vary little in energy across L values. Ion access to the inner magnetosphere during these times is limited to the nose energy bands. As convection is enhanced and large amounts of plasma are injected from the plasma sheet during the main phase of the storm, ion access occurs at a wide energy range, as no nose structures are observed. As the magnetosphere recovers from the storm, single noses and then multiple noses are observed once again. Lastly, we use a model of ion drift and losses due to charge exchange to simulate the ion spectra and gain insight into the main observed features.

  7. Temporal evolution of ion spectral structures during a geomagnetic storm: Observations and modeling

    DOE PAGES

    Ferradas Alva, Cristian Pablo; Zhang, J.-C.; Spence, H. E.; ...

    2017-12-13

    Using the Van Allen Probes/Helium, Oxygen, Proton, and Electron (HOPE) mass spectrometer, we perform a case study of the temporal evolution of ion spectral structures observed in the energy range of 1 to ~50 keV throughout the geomagnetic storm of 2 October 2013. The ion spectral features are observed near the inner edge of the plasma sheet and are signatures of fresh transport from the plasma sheet into the inner magnetosphere. We find that the characteristics of the ion structures are determined by the intensity of the convection electric field. Prior to the beginning of the storm, the plasma sheet inner edge exhibits narrow nose spectral structures that vary little in energy across L values. Ion access to the inner magnetosphere during these times is limited to the nose energy bands. As convection is enhanced and large amounts of plasma are injected from the plasma sheet during the main phase of the storm, ion access occurs at a wide energy range, as no nose structures are observed. As the magnetosphere recovers from the storm, single noses and then multiple noses are observed once again. Lastly, we use a model of ion drift and losses due to charge exchange to simulate the ion spectra and gain insight into the main observed features.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaygusuz, K.

    Energy generation and use are strongly linked to all elements of sustainable development: economic, social, and environmental. The history of human development rests on the availability and use of energy, from the early use of fire and animal power that improved lives to the present world's use of electricity and clean fuels for a multitude of purposes. Energy is the neglected issue of the development debate. The lack of access to reliable and clean energy supplies is a major barrier to improving human well-being around the globe. An estimated 1.6 billion people living in the rural areas of developing countries lack access to electricity and so depend on fossil fuels. Combustion of fossil fuels produces large amounts of CO2, an important greenhouse gas. In response to increasing concern about the effect of anthropogenic greenhouse gases on global climate, international action has been agreed to reduce these emissions. On the other hand, renewable energy is the great, barely tapped solution to the two great challenges of the coming century: poverty and global warming. Not only can renewable energy provide a clean, flexible power source for homes, schools and hospitals; at the micro-to-medium scale it has huge potential to create meaningful and useful jobs.

  9. Recruitment of African American and Latino Adolescent Couples in Romantic Relationships: Lessons Learned

    PubMed Central

    Rivera, Angelic; Watnick, Dana; Bauman, Laurie J.

    2012-01-01

    Background: There is considerable literature on effective engagement strategies for recruiting adolescents individually for health research studies, but the literature on recruiting adolescent couples is new and minimal. Purpose: This paper describes the recruitment strategies used for Teen Connections (TC), a longitudinal study that recruited 139 mainly African American and Latino adolescent couples in romantic relationships living in New York City. Method: We collected data in Microsoft Access and documented the date each recruitment strategy was implemented, the date each partner was enrolled, and the amount of effort required to enroll participants. We identified individual and relationship characteristics from each partner's baseline survey. Results: We found that relationship type and characteristics, the language used in printed materials, parental consent, implementation of a screener questionnaire, and gender of partner had implications for enrollment in TC. Discussion: Couples studies are highly demanding but achievable with dedicated staff and access to a large number of youth. Translation to Health Education Practice: Research on sexual health and risk often relies on individual reports of dyadic events. Adolescent couples' studies may not be pursued because of recruitment limitations, but they can provide invaluable insight into relationship dynamics and characteristics that may help design better health education interventions, and should therefore be pursued. PMID:23326814

  10. Challenging Oil Bioremediation at Deep-Sea Hydrostatic Pressure

    PubMed Central

    Scoma, Alberto; Yakimov, Michail M.; Boon, Nico

    2016-01-01

    The Deepwater Horizon accident has brought oil contamination of deep-sea environments to worldwide attention. The risk for new deep-sea spills is not expected to decrease in the future, as political pressure mounts to access deep-water fossil reserves, and poorly tested technologies are used to access oil. This also applies to the response to oil-contamination events, with bioremediation the only (bio)technology presently available to combat deep-sea spills. Many questions about the fate of petroleum-hydrocarbons within deep-sea environments remain unanswered, as well as the main constraints limiting bioremediation under increased hydrostatic pressures and low temperatures. The microbial pathways fueling oil bioassimilation are unclear, and the mild upregulation observed for beta-oxidation-related genes in both water and sediments contrasts with the high amount of alkanes present in the spilled oil. The fate of solid alkanes (tar), hydrocarbon degradation rates and the reason why the most predominant hydrocarbonoclastic genera were not enriched at deep-sea despite being present at hydrocarbon seeps at the Gulf of Mexico have been largely overlooked. This mini-review aims at highlighting the missing information in the field, proposing a holistic approach where in situ and ex situ studies are integrated to reveal the principal mechanisms accounting for deep-sea oil bioremediation. PMID:27536290

  11. Soil and water characteristics of a young surface mine wetland

    NASA Astrophysics Data System (ADS)

    Andrew Cole, C.; Lefebvre, Eugene A.

    1991-05-01

    Coal companies are reluctant to include wetland development in reclamation plans, partly due to a lack of information on the resulting characteristics of such sites. It is easier for coal companies to recreate terrestrial habitats than to attempt experimental methods and possibly face significant regulatory disapproval. We therefore studied a young (10 years) wetland on a reclaimed surface coal mine in southern Illinois to ascertain soil and water characteristics, so that the site might serve as a model for wetland development on surface mines. Water pH was not measured because of equipment problems, but evidence (plant life, fish, herpetofauna) suggests suitable pH levels. Other water parameters (conductivity, salinity, alkalinity, chloride, copper, total hardness, iron, manganese, nitrate, nitrite, phosphate, and sulfate) were measured, and only copper was seen in potentially high concentrations (but with no obvious toxic effects). Soil variables measured included pH, nitrate, nitrite, ammonia, potassium, calcium, magnesium, manganese, aluminum, iron, sulfate, chloride, and percent organic matter. Soils were slightly alkaline, and most parameters fell within levels reported in other studies of both natural and man-made wetlands. Aluminum was high, but this may indicate large amounts complexed with the soil, and therefore unavailable, rather than amounts actually accessible to plants. Organic matter was moderate, which is somewhat surprising given the age of the system.

  12. Data Curation for the Exploitation of Large Earth Observation Products Databases - The MEA system

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Cavicchi, Mario; Della Vecchia, Andrea

    2014-05-01

    National space agencies, under the umbrella of the European Space Agency, are working intensively to handle and provide solutions for Big Data and the management and exploitation of related knowledge (metadata, software tools and services). The continuously increasing amount of long-term and historic data held in EO facilities in the form of online datasets and archives, the incoming satellite observation platforms that will generate an impressive amount of new data, and the new EU approach to data distribution policy make it necessary to address technologies for the long-term management of these data sets, including their consolidation, preservation, distribution, continuation and curation across multiple missions. The management of long EO data time series from continuing or historic missions, with more than 20 years of data already available today, requires technical solutions and technologies that differ considerably from those exploited by existing systems. Several tools, both open source and commercial, already provide technologies to handle data and metadata preparation, access and visualization via OGC standard interfaces. This study describes the Multi-sensor Evolution Analysis (MEA) system and the Data Curation concept as approached and implemented within the ASIM and EarthServer projects, funded by the European Space Agency and the European Commission, respectively.

  13. Human Milk Oligosaccharides (HMOS): Structure, Function, and Enzyme-Catalyzed Synthesis.

    PubMed

    Chen, Xi

    2015-01-01

    The important roles played by human milk oligosaccharides (HMOS), the third major component of human milk, in the health of breast-fed infants have been increasingly recognized, as the structures of more than 100 different HMOS have now been elucidated. Despite the recognition of the various functions of HMOS as prebiotics, antiadhesive antimicrobials, and immunomodulators, the roles and the applications of individual HMOS species are less clear. This is mainly due to the limited accessibility to large amounts of individual HMOS in their pure forms. Current advances in the development of enzymatic, chemoenzymatic, whole-cell, and living-cell systems allow for the production of a growing number of HMOS in increasing amounts. This effort will greatly facilitate the elucidation of the important roles of HMOS and allow exploration into the applications of HMOS both as individual compounds and as mixtures of defined structures with desired functions. The structures, functions, and enzyme-catalyzed synthesis of HMOS are briefly surveyed to provide a general picture about the current progress on these aspects. Future efforts should be devoted to elucidating the structures of more complex HMOS, synthesizing more complex HMOS including those with branched structures, and developing HMOS-based or HMOS-inspired prebiotics, additives, and therapeutics. © 2015 Elsevier Inc. All rights reserved.

  14. Impact of lignin polymer backbone esters on ionic liquid pretreatment of poplar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kwang Ho; Dutta, Tanmoy; Ralph, John

    Biomass pretreatment remains an essential step in lignocellulosic biofuel production, largely to facilitate the efficient removal of lignin and increase enzyme accessibility to the polysaccharides. In recent years, there have been significant efforts in planta to reduce lignin content or modify its composition to overcome the inherent recalcitrance that it imposes on lignocellulosic biomass during processing. Here, transgenic poplar lines in which monolignol ferulate conjugates were synthesized during cell wall development to introduce, during lignification, readily cleavable ester linkages into the lignin polymer backbone (i.e., "zip lignin"), along with wild-type (WT) controls, were pretreated with different ionic liquids (ILs). The strategic introduction of ester bonds into the lignin backbone resulted in increased pretreatment efficiency and released more carbohydrates with lower energy input. After pretreatment with any of three different ILs, and after limited saccharification, the transgenic poplars, especially those with relatively higher amounts of incorporated monolignol ferulate conjugates, yielded up to 23% higher sugar levels compared to WT plants. Our findings clearly demonstrate that the introduction of ester linkages into the lignin polymer backbone decreases biomass recalcitrance in poplar, has the potential to reduce the energy and/or amount of IL required for effective pretreatment, and could enable the development of an economically viable and sustainable biorefinery process.

  15. Impact of lignin polymer backbone esters on ionic liquid pretreatment of poplar

    DOE PAGES

    Kim, Kwang Ho; Dutta, Tanmoy; Ralph, John; ...

    2017-04-20

    Biomass pretreatment remains an essential step in lignocellulosic biofuel production, largely to facilitate the efficient removal of lignin and increase enzyme accessibility to the polysaccharides. In recent years, there have been significant efforts in planta to reduce lignin content or modify its composition to overcome the inherent recalcitrance that it imposes on lignocellulosic biomass during processing. Here, transgenic poplar lines in which monolignol ferulate conjugates were synthesized during cell wall development to introduce, during lignification, readily cleavable ester linkages into the lignin polymer backbone (i.e., "zip lignin"), along with wild-type (WT) controls, were pretreated with different ionic liquids (ILs). The strategic introduction of ester bonds into the lignin backbone resulted in increased pretreatment efficiency and released more carbohydrates with lower energy input. After pretreatment with any of three different ILs, and after limited saccharification, the transgenic poplars, especially those with relatively higher amounts of incorporated monolignol ferulate conjugates, yielded up to 23% higher sugar levels compared to WT plants. Our findings clearly demonstrate that the introduction of ester linkages into the lignin polymer backbone decreases biomass recalcitrance in poplar, has the potential to reduce the energy and/or amount of IL required for effective pretreatment, and could enable the development of an economically viable and sustainable biorefinery process.

  16. 76 FR 26983 - Improving Wireless Coverage Through the Use of Signal Boosters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... Disabilities: To request materials in accessible formats for people with disabilities (braille, large [email protected] . 5. To request materials in accessible formats for people with disabilities (Braille, large... signal boosters. Specifically, the Commission proposes that marketing materials must include a...

  17. A federated capability-based access control mechanism for internet of things (IoTs)

    NASA Astrophysics Data System (ADS)

    Xu, Ronghua; Chen, Yu; Blasch, Erik; Chen, Genshe

    2018-05-01

    The prevalence of the Internet of Things (IoT) allows heterogeneous embedded smart devices to collaboratively provide intelligent services with or without human intervention. While enabling large-scale IoT-based applications like Smart Grid and Smart Cities, IoT also raises concerns about privacy and security. Among the top security challenges IoT faces is that access authorization is critical for resource and information protection. Traditional access control approaches, like Access Control Lists (ACL), Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC), are not able to provide scalable, manageable and efficient mechanisms that meet the requirements of IoT systems. The extraordinarily large number of nodes, their heterogeneity and their dynamicity necessitate more fine-grained, lightweight mechanisms for IoT devices. In this paper, a federated capability-based access control (FedCAC) framework is proposed to enable effective access control over devices, services and information in large-scale IoT systems. A federated capability delegation mechanism, based on a propagation tree, is illustrated for access-permission propagation. An identity-based capability token management strategy is presented, covering registration, propagation and revocation of access authorization. By delegating the centralized authorization decision-making policy to a local domain delegator, the access authorization process is conducted locally on the service provider, integrating situational awareness (SAW) and customized contextual conditions. Implemented and tested on both resource-constrained devices, like smart sensors and the Raspberry Pi, and non-resource-constrained devices, like laptops and smartphones, our experimental results demonstrate the feasibility of the proposed FedCAC approach in offering a scalable, lightweight and fine-grained access control solution for IoT systems connected to a network.
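
    A toy capability-token check conveys the flavor of the local, fine-grained authorization decision made at the service provider; the token fields and the delegation encoding below are assumptions for illustration, not the paper's exact data model:

    ```python
    import time

    # Hypothetical capability token: who may do what to which resource, under
    # which contextual conditions, and via which delegation path.
    token = {
        "subject": "sensor-42",
        "resource": "/camera/feed",
        "rights": {"read"},
        "delegated_by": ["domain-delegator-A"],  # path through the propagation tree
        "expires": time.time() + 3600,
        "context": {"location": "zone-1"},       # customized contextual condition
    }

    def authorize(tok, subject, resource, right, location):
        """Local decision: identity, right, expiry and context must all match."""
        return (tok["subject"] == subject
                and tok["resource"] == resource
                and right in tok["rights"]
                and time.time() < tok["expires"]
                and tok["context"]["location"] == location)

    print(authorize(token, "sensor-42", "/camera/feed", "read", "zone-1"))  # True
    print(authorize(token, "sensor-42", "/camera/feed", "write", "zone-1"))  # False
    ```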

  18. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets, and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and to over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  19. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets, and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and to over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
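
    The wavelet building block behind such compression and multi-resolution access can be sketched with PyWavelets (standing in here for whatever library the framework actually uses); thresholding small detail coefficients trades reconstruction accuracy for storage and bandwidth:

    ```python
    import numpy as np
    import pywt

    # Multi-level wavelet decomposition of a 1-D velocity signal (synthetic data)
    signal = np.random.default_rng(1).normal(size=1024)
    coeffs = pywt.wavedec(signal, "db4", level=4)

    # Keep the coarse approximation; zero out small detail coefficients
    threshold = 0.5
    compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard")
                                for c in coeffs[1:]]

    reconstructed = pywt.waverec(compressed, "db4")
    print("max reconstruction error:", np.abs(signal - reconstructed).max())
    ```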

  20. Convenience stores are the key food environment influence on nutrients available from household food supplies in Texas Border Colonias

    PubMed Central

    2013-01-01

    Background Few studies have focused on the relationship between the retail food environment and household food supplies. This study examines spatial access to retail food stores, food shopping habits, and nutrients available in household food supplies among 50 Mexican-origin families residing in Texas border colonias. Methods The design was cross-sectional; data were collected in the home March to June 2010 by promotora-researchers. Ground-truthed methods enumerated traditional (supercenters, supermarkets, grocery stores), convenience (convenience stores and food marts), and non-traditional (dollar stores, discount stores) retail food stores. Spatial access was computed using the network distance from each participant’s residence to each food store. Data included survey data and two household food inventories (HFI) of the presence and amount of food items in the home. The Spanish language interviewer-administered survey included demographics, transportation access, food purchasing, food and nutrition assistance program participation, and the 18-item Core Food Security Module. Nutrition Data Systems for Research (NDS-R) was used to calculate HFI nutrients. Adult equivalent adjustment constants (AE), based on age and gender calorie needs, were calculated based on the age- and gender composition of each household and used to adjust HFI nutrients for household composition. Data were analyzed using bivariate analysis and linear regression models to determine the association of independent variables with the availability of each AE-adjusted nutrient. Results Regression models showed that households in which the child independently purchased food from a convenience store at least once a week had foods and beverages with increased amounts of total energy, total fat, and saturated fat. A greater distance to the nearest convenience store was associated with reduced amounts of total energy, vitamin D, total sugar, added sugar, total fat, and saturated fat. Participation in the National School Lunch Program (NSLP) was associated with lower household levels of total energy, calcium, vitamin C, sodium, vitamin D, and saturated fat. Spatial access and utilization of supermarkets and dollar stores were not associated with nutrient availability. Conclusions Although household members frequently purchased food items from supermarkets or dollar stores, it was spatial access to and frequent utilization of convenience food stores that influenced the amount of nutrients present in Texas border colonia households. These findings also suggest that households which participate in NSLP have reduced AE-adjusted nutrients available in the home. The next step will target changes within convenience stores to improve in-store marketing of foods and beverages to children and adults. PMID:23327426

  1. Convenience stores are the key food environment influence on nutrients available from household food supplies in Texas Border Colonias.

    PubMed

    Sharkey, Joseph R; Dean, Wesley R; Nalty, Courtney C; Xu, Jin

    2013-01-17

    Few studies have focused on the relationship between the retail food environment and household food supplies. This study examines spatial access to retail food stores, food shopping habits, and nutrients available in household food supplies among 50 Mexican-origin families residing in Texas border colonias. The design was cross-sectional; data were collected in the home March to June 2010 by promotora-researchers. Ground-truthed methods enumerated traditional (supercenters, supermarkets, grocery stores), convenience (convenience stores and food marts), and non-traditional (dollar stores, discount stores) retail food stores. Spatial access was computed using the network distance from each participant's residence to each food store. Data included survey data and two household food inventories (HFI) of the presence and amount of food items in the home. The Spanish language interviewer-administered survey included demographics, transportation access, food purchasing, food and nutrition assistance program participation, and the 18-item Core Food Security Module. Nutrition Data Systems for Research (NDS-R) was used to calculate HFI nutrients. Adult equivalent adjustment constants (AE), based on age and gender calorie needs, were calculated based on the age- and gender composition of each household and used to adjust HFI nutrients for household composition. Data were analyzed using bivariate analysis and linear regression models to determine the association of independent variables with the availability of each AE-adjusted nutrient. Regression models showed that households in which the child independently purchased food from a convenience store at least once a week had foods and beverages with increased amounts of total energy, total fat, and saturated fat. A greater distance to the nearest convenience store was associated with reduced amounts of total energy, vitamin D, total sugar, added sugar, total fat, and saturated fat. Participation in the National School Lunch Program (NSLP) was associated with lower household levels of total energy, calcium, vitamin C, sodium, vitamin D, and saturated fat. Spatial access and utilization of supermarkets and dollar stores were not associated with nutrient availability. Although household members frequently purchased food items from supermarkets or dollar stores, it was spatial access to and frequent utilization of convenience food stores that influenced the amount of nutrients present in Texas border colonia households. These findings also suggest that households which participate in NSLP have reduced AE-adjusted nutrients available in the home. The next step will target changes within convenience stores to improve in-store marketing of foods and beverages to children and adults.

  2. What is the Best Way to Treat Diarrhea?

    MedlinePlus

    ... starts making normal amounts of urine again. Reminder: Do's and Don'ts. Do watch for signs of ...

  3. 78 FR 28000 - Entergy Louisiana, LLC and Entergy Operations, Inc.; Waterford Steam Electric Station, Unit No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ... Regulation satisfactory documentary evidence that New ELL has obtained the appropriate amount of insurance..., see the initial application dated September 27, 2012 (Agencywide Documents Access and Management...

  4. Community-Based Services that Facilitate Interoperability and Intercomparison of Precipitation Datasets from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana

    2010-01-01

    Over the past 12 years, large volumes of precipitation data have been generated from space-based observatories (e.g., TRMM), merging of data products (e.g., gridded 3B42), models (e.g., GMAO), climatologies (e.g., Chang SSM/I derived rain indices), field campaigns, and ground-based measuring stations. The science research, applications, and education communities have greatly benefited from the unrestricted availability of these data from the Goddard Earth Sciences Data and Information Services Center (GES DISC) and, in particular, the services tailored toward precipitation data access and usability. In addition, tools and services that are responsive to the expressed evolving needs of the precipitation data user communities have been developed at the Precipitation Data and Information Services Center (PDISC) (http://disc.gsfc.nasa.gov/precipitation or google NASA PDISC), located at the GES DISC, to provide users with quick data exploration and access capabilities. In recent years, data management and access services have become increasingly sophisticated, such that they now afford researchers, particularly those interested in multi-data set science analysis and/or data validation, the ability to homogenize data sets, in order to apply multi-variant, comparison, and evaluation functions. Included in these services is the ability to capture data quality and data provenance. These interoperability services can be directly applied to future data sets, such as those from the Global Precipitation Measurement (GPM) mission. This presentation describes the data sets and services at the PDISC that are currently used by precipitation science and applications researchers, and which will be enhanced in preparation for GPM and associated multi-sensor data research. Specifically, the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) will be illustrated. Giovanni enables scientific exploration of Earth science data without researchers having to perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.

  5. Compiling and using input-output frameworks through collaborative virtual laboratories.

    PubMed

    Lenzen, Manfred; Geschke, Arne; Wiedmann, Thomas; Lane, Joe; Anderson, Neal; Baynes, Timothy; Boland, John; Daniels, Peter; Dey, Christopher; Fry, Jacob; Hadjikakou, Michalis; Kenway, Steven; Malik, Arunima; Moran, Daniel; Murray, Joy; Nettleton, Stuart; Poruschi, Lavinia; Reynolds, Christian; Rowley, Hazel; Ugon, Julien; Webb, Dean; West, James

    2014-07-01

    Compiling, deploying and utilising large-scale databases that integrate environmental and economic data have traditionally been labour- and cost-intensive processes, hindered by the large amount of disparate and misaligned data that must be collected and harmonised. The Australian Industrial Ecology Virtual Laboratory (IELab) is a novel, collaborative approach to compiling large-scale environmentally extended multi-region input-output (MRIO) models. The utility of the IELab product is greatly enhanced by avoiding the need to lock in an MRIO structure at the time the MRIO system is developed. The IELab advances the idea of the "mother-daughter" construction principle, whereby a regionally and sectorally very detailed "mother" table is set up, from which "daughter" tables are derived to suit specific research questions. By introducing a third tier - the "root classification" - IELab users are able to define their own mother-MRIO configuration, at no additional cost in terms of data handling. Customised mother-MRIOs can then be built, which maximise disaggregation in aspects that are useful to a family of research questions. The second innovation in the IELab system is to provide a highly automated collaborative research platform in a cloud-computing environment, greatly expediting workflows and making these computational benefits accessible to all users. Combining these two aspects realises many benefits. The collaborative nature of the IELab development project allows significant savings in resources. Timely deployment is possible by coupling automation procedures with the comprehensive input from multiple teams. User-defined MRIO tables, coupled with high performance computing, mean that MRIO analysis will be useful and accessible for a great many more research applications than would otherwise be possible. By ensuring that a common set of analytical tools such as for hybrid life-cycle assessment is adopted, the IELab will facilitate the harmonisation of fragmented, dispersed and misaligned raw data for the benefit of all interested parties. Copyright © 2014 Elsevier B.V. All rights reserved.
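
    The core computation an MRIO table supports is the Leontief demand-pull calculation: total output x = (I - A)^-1 y for technical-coefficient matrix A and final demand y. A toy three-sector sketch (all numbers invented, not IELab data) with a simple environmental extension:

    ```python
    import numpy as np

    # Hypothetical 3-sector technical-coefficient matrix and final demand
    A = np.array([[0.10, 0.04, 0.02],
                  [0.15, 0.20, 0.05],
                  [0.05, 0.10, 0.08]])
    y = np.array([100.0, 250.0, 80.0])     # final demand per sector

    L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^-1
    x = L @ y                              # total output needed to satisfy y
    print("total output per sector:", x)

    # Environmental extension: invented emissions intensities per unit output
    f = np.array([0.3, 0.9, 0.2])
    print("emissions embodied in final demand:", f @ x)
    ```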

  6. Enabling the transition towards Earth Observation Science 2.0

    NASA Astrophysics Data System (ADS)

    Mathieu, Pierre-Philippe; Desnos, Yves-Louis

    2015-04-01

    Science 2.0 refers to the rapid and systematic changes in doing research and organising science, driven by rapid advances in ICT and digital technologies combined with a growing demand to do science for society (actionable research) and in society (co-design of knowledge). Nowadays, teams of researchers around the world can easily access a wide range of open data across disciplines and remotely process them on the cloud, combining them with their own data to generate knowledge, develop information products for societal applications, and tackle complex integrative problems that could not be addressed a few years ago. Such rapid exchange of digital data is fostering a new world of data-intensive research, characterized by openness, transparency, scrutiny and traceability of results, access to large volumes of complex data, availability of community open tools, unprecedented levels of computing power, and new collaborations among researchers and new actors such as citizen scientists. The EO scientific community now faces the challenge of responding to this new Science 2.0 paradigm in order to make the most of the large volume of complex and diverse data delivered by the new generation of EO missions, in particular the Sentinels. In this context, ESA, in particular within the framework of the Scientific Exploitation of Operational Missions (SEOM) element, is supporting a variety of activities in partnership with research communities to ease the transition and make the most of the data. These include the generation of new open tools and exploitation platforms, exploring new ways to exploit data on cloud-based platforms, disseminating data, building new partnerships with citizen scientists, and training the new generation of data scientists. The paper gives a brief overview of some of ESA's activities aiming to facilitate the exploitation of large amounts of data from EO missions in a collaborative, cross-disciplinary, and open way, from science to applications and education.

  7. WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’

    NASA Astrophysics Data System (ADS)

    Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.

    2009-12-01

    The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well integrated into large-scale earth system analyses. A major hurdle is the lack of accessible geospatial data on peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data are, however, available at the local scale. Although a comprehensive global database on peatlands probably lags similar data on more economically important ecosystems such as forests, grasslands and croplands, a large amount of field data has been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but existing data have either not been assembled into a single, publicly accessible geospatial database or do not depict data at the level of detail needed by the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art web server and innovative online mapping technologies and is designed to create such a global database through 'crowd-sourcing'. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, and it facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community and will help to identify significant data gaps.

  8. A Burgeoning Crisis? A Nationwide Assessment of the Geography of Water Affordability in the United States

    PubMed Central

    Mack, Elizabeth A.; Wrase, Sarah

    2017-01-01

    While basic access to clean water is critical, another important issue is the affordability of water access for people around the globe. Prior international work has highlighted that a large proportion of consumers could not afford water if priced at full cost recovery levels. Given growing concern about affordability issues due to rising water rates, and a comparative lack of work on affordability in the developed world, as compared to the developing world, more work is needed in developed countries to understand the extent of this issue in terms of the number of households and persons impacted. To address this need, this paper assesses potential affordability issues for households in the United States using the U.S. EPA’s 4.5% affordability criteria for combined water and wastewater services. Analytical results from this paper highlight high-risk and at-risk households for water poverty or unaffordable water services. Many of these households are clustered in pockets of water poverty within counties, which is a concern for individual utility providers servicing a large proportion of customers with a financial inability to pay for water services. Results also highlight that while water rates remain comparatively affordable for many U.S. households, this trend will not continue in the future. If water rates rise at projected amounts over the next five years, conservative projections estimate that the percentage of U.S. households who will find water bills unaffordable could triple from 11.9% to 35.6%. This is a concern due to the cascading economic impacts associated with widespread affordability issues; these issues mean that utility providers could have fewer customers over which to spread the large fixed costs of water service. Unaffordable water bills also impact customers for whom water services are affordable via higher water rates to recover the costs of services that go unpaid by lower income households. PMID:28076374
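
    As a back-of-envelope illustration of the 4.5% affordability screen used above, the sketch below flags households whose combined annual water and wastewater bill exceeds 4.5% of household income; all incomes and bills are invented for illustration.

```python
# Illustration of the EPA-style affordability screen described above:
# a household is flagged when its combined annual water and wastewater
# bill exceeds 4.5% of household income. All figures are hypothetical.

AFFORDABILITY_THRESHOLD = 0.045  # combined water + wastewater share of income

households = [
    # (annual_income_usd, annual_water_wastewater_bill_usd)
    (65_000, 1_200),
    (24_000, 1_150),
    (18_500, 1_000),
    (102_000, 1_400),
]

def is_unaffordable(income, bill, threshold=AFFORDABILITY_THRESHOLD):
    """Return True if the combined bill exceeds the income-share threshold."""
    return bill > threshold * income

flagged = [is_unaffordable(inc, bill) for inc, bill in households]
share = sum(flagged) / len(households)
print(f"Households above the 4.5% threshold: {share:.0%}")
```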

  9. A Burgeoning Crisis? A Nationwide Assessment of the Geography of Water Affordability in the United States.

    PubMed

    Mack, Elizabeth A; Wrase, Sarah

    2017-01-01

    While basic access to clean water is critical, another important issue is the affordability of water access for people around the globe. Prior international work has highlighted that a large proportion of consumers could not afford water if priced at full cost recovery levels. Given growing concern about affordability issues due to rising water rates, and a comparative lack of work on affordability in the developed world, as compared to the developing world, more work is needed in developed countries to understand the extent of this issue in terms of the number of households and persons impacted. To address this need, this paper assesses potential affordability issues for households in the United States using the U.S. EPA's 4.5% affordability criteria for combined water and wastewater services. Analytical results from this paper highlight high-risk and at-risk households for water poverty or unaffordable water services. Many of these households are clustered in pockets of water poverty within counties, which is a concern for individual utility providers servicing a large proportion of customers with a financial inability to pay for water services. Results also highlight that while water rates remain comparatively affordable for many U.S. households, this trend will not continue in the future. If water rates rise at projected amounts over the next five years, conservative projections estimate that the percentage of U.S. households who will find water bills unaffordable could triple from 11.9% to 35.6%. This is a concern due to the cascading economic impacts associated with widespread affordability issues; these issues mean that utility providers could have fewer customers over which to spread the large fixed costs of water service. Unaffordable water bills also impact customers for whom water services are affordable via higher water rates to recover the costs of services that go unpaid by lower income households.

  10. Imaging near surface mineral targets with ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Dales, P.; Audet, P.; Olivier, G.

    2017-12-01

    To keep up with global metal and mineral demand, new ore deposits have to be discovered on a regular basis. This task is becoming increasingly difficult, since easily accessible deposits have largely been exhausted. The typical procedure for mineral exploration begins with geophysical surveys followed by a drilling program to investigate potential targets. Since the retrieved drill core samples are one-dimensional observations, the many holes needed to interpolate and interpret potential deposits can lead to very high costs. To reduce the amount of drilling, active seismic imaging is sometimes used as an intermediary; however, the active sources (e.g. large vibrating trucks or explosive shots) are expensive and unsuitable for operation in remote or environmentally sensitive areas. In recent years, passive seismic imaging using ambient noise has emerged as a novel, low-cost and low-impact approach for exploring the sub-surface. This technique dispenses with active seismic sources and instead uses ambient seismic noise such as ocean waves, traffic or minor earthquakes. Unfortunately, at this point passive surveys are not capable of reaching the resolution required to image the vast majority of the ore bodies being explored. In this presentation, we will show the results of an experiment where ambient seismic noise recorded on 60 seismic stations was used to image a near-mine target. The target consists of a known ore body that was partially exhausted by mining roughly 100 years ago. The experiment examined whether ambient seismic noise interferometry can be used to image the intact and exhausted ore deposit. A drilling campaign was also conducted near the target, which offers the opportunity to compare the two methods. If the accuracy and resolution of passive seismic imaging can be improved to that of active surveys (and beyond), this method could become an inexpensive intermediary step in the exploration process and result in a large decrease in the amount of drilling required to investigate and identify high-grade ore deposits.
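
    The core operation in the ambient noise interferometry mentioned above is the cross-correlation of simultaneous noise records from station pairs; the stacked correlation peaks near the inter-station travel time. A minimal numpy sketch on synthetic data, not the authors' processing chain:

```python
import numpy as np

# Minimal sketch of ambient-noise interferometry: cross-correlate long
# noise records from two stations; coherent energy emerges at the
# inter-station travel time. Synthetic data for illustration only.

rng = np.random.default_rng(0)
fs = 100.0                   # sampling rate (Hz)
n = 100_000                  # samples per record
lag_true = 250               # travel-time delay in samples (hypothetical)

noise = rng.standard_normal(n + lag_true)
sta_a = noise[:n]                          # station A sees the noise field
sta_b = noise[lag_true:lag_true + n]       # station B sees it delayed

# FFT-based cross-correlation, keeping lags in [-max_lag, +max_lag]
max_lag = 500
nfft = 2 * n                               # zero-pad to avoid circular wrap
spec = np.fft.rfft(sta_a, nfft) * np.conj(np.fft.rfft(sta_b, nfft))
cc = np.fft.irfft(spec, nfft)
cc = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])  # reorder lags

lags = np.arange(-max_lag, max_lag + 1)
print("correlation peak at lag (s):", lags[np.argmax(np.abs(cc))] / fs)
```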

  11. Utilizing Solar Power Technologies for On-Orbit Propellant Production

    NASA Technical Reports Server (NTRS)

    Fikes, John C.; Howell, Joe T.; Henley, Mark W.

    2006-01-01

    The cost of access to space beyond low Earth orbit may be reduced if vehicles can refuel in orbit. The cost of access to low Earth orbit may also be reduced by launching oxygen and hydrogen propellants in the form of water. To achieve this reduction in the costs of access to low Earth orbit and beyond, a propellant depot is considered that electrolyzes water in orbit, then condenses and stores cryogenic oxygen and hydrogen. The power requirements of such a depot call for Solar Power Satellite technologies. A propellant depot utilizing solar power technologies is discussed in this paper. The depot will be deployed in a 400 km circular equatorial orbit. It receives tanks of water launched into a lower orbit from Earth, converts the water to liquid hydrogen and oxygen, and stores up to 500 metric tons of cryogenic propellants. This requires a power system comparable to a large Solar Power Satellite, capable of generating several hundred kilowatts. Power is supplied by a pair of solar arrays mounted perpendicular to the orbital plane, which rotate once per orbit to track the Sun. The majority of the power is used to run the electrolysis system. Thermal control is maintained by body-mounted radiators; these also provide some shielding against orbital debris. The propellant stored in the depot can support transportation from low Earth orbit to geostationary Earth orbit, the Moon, Lagrange points, Mars, etc. Emphasis is placed on the Water-Ice to Cryogen propellant production facility. A very high power system is required for cracking (electrolyzing) the water and condensing and refrigerating the resulting oxygen and hydrogen. For a propellant production rate of 500 metric tons (1,100,000 pounds) per year, an average electrical power supply of hundreds of kilowatts is required. To make the most efficient use of space solar power, electrolysis is performed only during the portion of the orbit that the depot is in sunlight, so roughly twice this power level is needed for operations in sunlight (slightly over half of the time). This power level mandates large solar arrays, using advanced Space Solar Power technology. A significant amount of the power has to be dissipated as heat, through large radiators. This paper briefly describes the propellant production facility and the requirements for a high power system capability. The solar power technologies required for such an endeavor are discussed.
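
    The "hundreds of kilowatts" figure can be sanity-checked from first principles. The sketch below assumes an electrolysis energy near the higher heating value of water (about 286 kJ/mol) and a notional 70% electrolyzer efficiency; neither number is taken from the paper.

```python
# Back-of-envelope check of the depot power figure quoted above.
# Assumptions (not from the paper): electrolysis energy near the HHV of
# water, 286 kJ/mol; a notional 70% electrolyzer efficiency; production
# of 500 metric tons of propellant (as water) per year.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
water_kg_per_year = 500_000.0
molar_mass_water = 0.018015          # kg/mol
hhv_per_mol = 286e3                  # J/mol to split H2O (higher heating value)
electrolyzer_efficiency = 0.70       # assumed

mol_per_year = water_kg_per_year / molar_mass_water
energy_per_year = mol_per_year * hhv_per_mol / electrolyzer_efficiency  # J

avg_power_kw = energy_per_year / SECONDS_PER_YEAR / 1e3
print(f"Average electrical power: ~{avg_power_kw:.0f} kW")
# Electrolyzing only on the sunlit side (roughly half of each orbit)
# doubles the instantaneous requirement, consistent with 'hundreds of kW'.
print(f"Sunlit-side power: ~{2 * avg_power_kw:.0f} kW")
```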

  12. Chemometrics-based Approach in Analysis of Arnicae flos

    PubMed Central

    Zheleva-Dimitrova, Dimitrina Zh.; Balabanova, Vessela; Gevrenova, Reneta; Doichinova, Irini; Vitkova, Antonina

    2015-01-01

    Introduction: Arnica montana flowers have a long history as herbal medicines for external use on injuries and rheumatic complaints. Objective: To investigate Arnicae flos of cultivated accessions from Bulgaria, Poland, Germany, Finland, and a pharmacy store for phenolic derivatives and sesquiterpene lactones (STLs). Materials and Methods: Samples of Arnica from nine origins were prepared by ultrasound-assisted extraction with 80% methanol for phenolic compounds analysis. Subsequent reverse-phase high-performance liquid chromatography (HPLC) separation of the analytes was performed using gradient elution and ultraviolet detection at 280 and 310 nm (phenolic acids), and 360 nm (flavonoids). Total STLs were determined in chloroform extracts by solid-phase extraction-HPLC at 225 nm. The HPLC-generated chromatographic data were analyzed using principal component analysis (PCA) and hierarchical clustering (HC). Results: The highest total amount of phenolic acids was found in the sample from the Botanical Garden at Joensuu University, Finland (2.36 mg/g dw). Astragalin, isoquercitrin, and isorhamnetin 3-glucoside were the main flavonol glycosides, present at up to 3.37 mg/g (astragalin). Three well-defined clusters were distinguished by PCA and HC. Cluster C1 comprised the German and Finnish accessions, characterized by the highest content of flavonols. Cluster C2 included the Bulgarian and Polish samples, presenting a low content of flavonoids. Cluster C3 consisted of only one sample, from a pharmacy store. Conclusion: A validated HPLC method for simultaneous determination of phenolic acids, flavonoid glycosides, and aglycones in A. montana flowers was developed. The PCA loading plot showed that quercetin, kaempferol, and isorhamnetin can be used to distinguish different Arnica accessions. SUMMARY A principal component analysis (PCA) on 13 phenolic compounds and the total amount of sesquiterpene lactones in the Arnicae flos collection tended to cluster the nine studied accessions into three main groups. The profiles obtained demonstrated that the samples from Germany and Finland are characterized by greater amounts of phenolic derivatives than the Bulgarian and Polish ones. The PCA loading plot showed that quercetin, kaempferol and isorhamnetin can be used to distinguish different Arnica accessions. PMID:27013791
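
    The chemometrics workflow described (PCA followed by hierarchical clustering on an accessions-by-compounds concentration matrix) can be sketched as follows; the matrix values and accession labels are invented, not the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Schematic of the chemometrics workflow above: PCA plus Ward
# hierarchical clustering on an accessions x compounds matrix.
# The data below are invented for illustration only.

accessions = ["DE-1", "DE-2", "FI-1", "BG-1", "BG-2",
              "PL-1", "PL-2", "PL-3", "Pharmacy"]
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 3.5, size=(9, 13))   # 9 accessions x 13 compounds (mg/g dw)

# Standardize so each compound contributes comparably
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("explained variance:", pca.explained_variance_ratio_)

# Cut the Ward dendrogram into three groups, as in the paper
labels = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
for name, lab in zip(accessions, labels):
    print(f"{name}: cluster C{lab}")
```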

  13. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  14. Advanced Teleprocessing Systems.

    DTIC Science & Technology

    1984-09-30

    Two types of systems are analyzed: 1) a system where the start-up delay depends on the amount of work (or the number of customers) arriving to the system at the beginning of the start-up period; and 2) a network where terminals are randomly distributed on the plane, are able to capture transmitted signals, and use slotted ALOHA to access the channel.
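
    The slotted ALOHA channel mentioned in this abstract has the classic throughput law S = G·exp(−G) for Poisson offered load G (frames per slot), peaking at G = 1. A quick Monte Carlo check:

```python
import numpy as np

# Monte Carlo check of the classic slotted ALOHA result: with Poisson
# offered load G (frames per slot), a slot succeeds only when exactly
# one terminal transmits, so throughput is S = G * exp(-G).

rng = np.random.default_rng(42)
slots = 200_000

for G in (0.5, 1.0, 2.0):
    attempts = rng.poisson(G, size=slots)        # transmissions per slot
    successes = np.count_nonzero(attempts == 1)  # exactly one => success
    print(f"G={G}: simulated S={successes / slots:.3f}, "
          f"theory S={G * np.exp(-G):.3f}")
```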

  15. Use of tropical maize for bioethanol production

    USDA-ARS?s Scientific Manuscript database

    Tropical maize is an alternative energy crop being considered as a feedstock for bioethanol production in the North Central and Midwest United States. Tropical maize is advantageous because it produces large amounts of soluble sugars in its stalks, creates a large amount of biomass, and requires lo...

  16. The media and access issues: content analysis of Canadian newspaper coverage of health policy decisions.

    PubMed

    Rachul, Christen; Caulfield, Timothy

    2015-08-25

    Previous studies have demonstrated how the media has an influence on policy decisions and healthcare coverage. Studies of Canadian media have shown that news coverage often emphasizes and hypes certain aspects of high-profile health debates. We hypothesized that in Canadian media coverage of access to health therapies and technologies, including those for rare diseases, the media would be largely sympathetic towards patients, thus adding to a public debate that largely favors increased access to healthcare-even in the face of equivocal evidence regarding efficacy. In order to test this hypothesis, we conducted a content analysis of 530 news articles about access to health therapies and technologies from 15 major Canadian newspapers over a 10-year period. Articles were analyzed for the perspectives presented and the types of reasons or arguments offered either for or against the particular access issue portrayed. We found that news media coverage was largely sympathetic towards increasing healthcare funding and ease of access to healthcare (77.4 %). Rare diseases and orphan drugs were the most common issues raised (22.6 %). Patients' perspectives were often highlighted (42.3 % of articles). 96.8 % of articles discussed why access to healthcare needs to increase, while discussion questioning increased access was included in only 33.6 % of articles. We found that news media favors a patient-access ethos, which may contribute to a difficult policy-making environment.

  17. An Assessment of the Food and Nutrition Security Status of Weaned 7–12 Months Old Children in Rural and Peri-Urban Communities of Gauteng and Limpopo Provinces, South Africa

    PubMed Central

    Siwela, Muthulisi; Kolanisi, Unathi; Abdelgadir, Hafiz; Ndhlala, Ashwell

    2017-01-01

    This study assessed the food and nutrition security status of children receiving complementary food in rural and peri-urban communities. A group of 106 mothers from Lebowakgomo village and Hammanskraal Township participated in the survey. Additionally, six focus group discussions were conducted per study area to assess the mothers’ perceptions about children’s food access. The Children’s Food Insecurity Access Scale (CFIAS) was used to assess the food security status (access) of the children. The Individual Dietary Diversity Score (IDDS), together with an unquantified food consumption frequency survey, was used as a proxy measure of the nutritional quality of the children’s diets. The age and weight of the children, obtained from the children’s clinic health cards, were used to calculate Weight-for-Age Z scores (WAZ) in order to determine the prevalence of underweight children. The findings showed that a large percentage of children were severely food-insecure: 87% and 78% in the rural and peri-urban areas, respectively. Additionally, 23.6% of Lebowakgomo children and 17.9% of Hammanskraal children were severely underweight. Overall, children’s diets in both study areas were characterized by nutrient-deficient complementary foods. Cheaper foods with a longer stomach-filling effect, such as white maize meal and sugar, were the most commonly purchased and used. Hence, the children consumed very limited amounts of foods rich in proteins, minerals, and vitamins, which significantly increased the risk of their being malnourished. PMID:28862694

  18. High-density genotyping of the A.E. Watkins Collection of hexaploid landraces identifies a large molecular diversity compared to elite bread wheat.

    PubMed

    Winfield, Mark O; Allen, Alexandra M; Wilkinson, Paul A; Burridge, Amanda J; Barker, Gary L A; Coghill, Jane; Waterfall, Christy; Wingen, Luzie U; Griffiths, Simon; Edwards, Keith J

    2018-01-01

    The importance of wheat as a food crop makes it a major target for agricultural improvements. As one of the most widely grown cereal grains, together with maize and rice, wheat is the leading provider of calories in the global diet, constituting 29% of global cereal production in 2015. In the last few decades, however, yields have plateaued, suggesting that the green revolution, at least for wheat, might have run its course and that new sources of genetic variation are urgently required. The overall aim of our work was to identify novel variation that may then be used to enable the breeding process. As landraces are a potential source of such diversity, here we have characterized the A.E. Watkins Collection alongside a collection of elite accessions using two complementary high-density and high-throughput genotyping platforms. While our results show the importance of using the appropriate SNP collection to compare diverse accessions, they also show that the Watkins Collection contains a substantial amount of novel genetic diversity which has either not been captured in current breeding programmes or which has been lost through previous selection pressures. As a consequence of our analysis, we have identified a number of accessions which carry an array of novel alleles along with a number of interesting chromosome rearrangements which confirm the variable nature of the wheat genome. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  19. Building the informatics infrastructure for comparative effectiveness research (CER): a review of the literature.

    PubMed

    Lopez, Marianne Hamilton; Holve, Erin; Sarkar, Indra Neil; Segal, Courtney

    2012-07-01

    Technological advances in clinical informatics have made large amounts of data accessible and potentially useful for research. As a result, a burgeoning literature addresses efforts to bridge the fields of health services research and biomedical informatics. The Electronic Data Methods Forum review examines peer-reviewed literature at the intersection of comparative effectiveness research and clinical informatics. The authors are specifically interested in characterizing this literature and identifying cross-cutting themes and gaps in the literature. A 3-step systematic literature search was conducted, including a structured search of PubMed, manual reviews of articles from selected publication lists, and manual reviews of research activities based on prospective electronic clinical data. Two thousand four hundred thirty-five citations were identified as potentially relevant. Ultimately, a full-text review was performed for 147 peer-reviewed papers. One hundred thirty-two articles were selected for inclusion in the review. Of these, 88 articles are the focus of the discussion in this paper. Three types of articles were identified, including papers that: (1) provide historical context or frameworks for using clinical informatics for research, (2) describe platforms and projects, and (3) discuss issues, challenges, and applications of natural language processing. In addition, 2 cross-cutting themes emerged: the challenges of conducting research in the absence of standardized ontologies and data collection; and unique data governance concerns related to the transfer, storage, deidentification, and access to electronic clinical data. Finally, the authors identified several current gaps on important topics such as the use of clinical informatics for cohort identification, cloud computing, and single point access to research data.

  20. Landspotting: Social gaming to collect vast amounts of data for satellite validation

    NASA Astrophysics Data System (ADS)

    Fritz, S.; Purgathofer, P.; Kayali, F.; Fellner, M.; Wimmer, M.; Sturn, T.; Triebnig, G.; Krause, S.; Schindler, F.; Kollegger, M.; Perger, C.; Dürauer, M.; Haberl, W.; See, L.; McCallum, I.

    2012-04-01

    At present there is no single satellite-derived global land cover product that is accurate enough to provide reliable estimates of forest or cropland area to determine, e.g., how much additional land is available to grow biofuels or to tackle problems of food security. The Landspotting Project aims to improve the quality of this land cover information by vastly increasing the amount of in-situ validation data available for calibration and validation of satellite-derived land cover. The Geo-Wiki (Geo-Wiki.org) system currently allows users to compare three satellite-derived land cover products and validate them using Google Earth. However, there has so far been little incentive for anyone to provide this data, so the amount of validation through Geo-Wiki has been limited. Recent competitions have proven that incentive-driven campaigns can rapidly create large amounts of input. The Landspotting Project is taking a truly innovative approach through the development of the Landspotting game. The game engages users whilst simultaneously collecting a large amount of in-situ land cover information. The development of the game is informed by the current raft of successful social games available on the internet and as mobile applications, many of which are geo-spatial in nature. Games that are integrated within a social networking site such as Facebook illustrate the power to reach and continually engage a large number of individuals. The number of active Facebook users is estimated to be greater than 400 million, of whom 100 million access Facebook from mobile devices. The Landspotting game has game mechanics similar to those of the famous strategy game "Civilization" (i.e. build, harvest, research, war, diplomacy, etc.). When a player wishes to make a settlement, they must first classify the land cover over the area they wish to settle. As the game is played on the Earth's surface with Google Maps, we are able to record and store this land cover/land use classification geographically. Every player can play the game for free (it is a massively multiplayer online game). Furthermore, it is a social game on Facebook (players can invite friends, send messages, purchase gifts, help friends, post messages onto the wall, etc.). The game is played in a web browser and therefore runs everywhere Flash is supported, without requiring the user to install anything additional. At the same time, the Geo-Wiki system will be modified to use the acquired in-situ validation information to create new outputs: a hybrid land cover map, which takes the best information from each individual product to create a single integrated version; a database of validation points that will be freely available to the land cover user community; and a facility that allows users to create a specific targeted validation area, which will then be provided to the crowdsourcing community for validation. These outputs will turn Geo-Wiki into a valuable system for earth system scientists.
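
    One plausible reading of the "hybrid land cover map" output is a per-pixel vote across the three products, with crowdsourced validation labels breaking ties. The sketch below illustrates that idea only; it is not the Geo-Wiki algorithm.

```python
import numpy as np

# Illustrative sketch (not the Geo-Wiki algorithm): build a 'hybrid'
# land cover map by per-pixel majority vote across three products,
# deferring to a crowdsourced label when the products tie.

rng = np.random.default_rng(7)
H, W, n_classes = 4, 4, 3
products = rng.integers(0, n_classes, size=(3, H, W))  # three class maps
crowd = rng.integers(0, n_classes, size=(H, W))        # crowd labels (hypothetical)

hybrid = np.empty((H, W), dtype=int)
for i in range(H):
    for j in range(W):
        votes = np.bincount(products[:, i, j], minlength=n_classes)
        winners = np.flatnonzero(votes == votes.max())
        # Unambiguous majority wins; ties defer to the crowdsourced label.
        hybrid[i, j] = winners[0] if len(winners) == 1 else crowd[i, j]

print(hybrid)
```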

  1. C-A1-03: Considerations in the Design and Use of an Oracle-based Virtual Data Warehouse

    PubMed Central

    Bredfeldt, Christine; McFarland, Lela

    2011-01-01

    Background/Aims The amount of clinical data available for research is growing exponentially. As it grows, increasing the efficiency of both data storage and data access becomes critical. Relational database management systems (rDBMS) such as Oracle are ideal solutions for managing longitudinal clinical data because they support large-scale data storage and highly efficient data retrieval. In addition, they can greatly simplify the management of large data warehouses, including security management and regular data refreshes. However, the HMORN Virtual Data Warehouse (VDW) was originally designed based on SAS datasets, and this design choice has a number of implications for both the design and use of an Oracle-based VDW. From a design standpoint, VDW tables are designed as flat SAS datasets, which do not take full advantage of Oracle indexing capabilities. From a data retrieval standpoint, standard VDW SAS scripts do not take advantage of SAS pass-through SQL capabilities to enable Oracle to perform the processing required to narrow datasets to the population of interest. Methods Beginning in 2009, the research department at Kaiser Permanente in the Mid-Atlantic States (KPMA) has developed an Oracle-based VDW according to the HMORN v3 specifications. In order to take advantage of the strengths of relational databases, KPMA introduced an interface layer to the VDW data, using views to provide access to standardized VDW variables. In addition, KPMA has developed SAS programs that provide access to SQL pass-through processing for first-pass data extraction into SAS VDW datasets for processing by standard VDW scripts. Results We discuss both the design and performance considerations specific to the KPMA Oracle-based VDW. We benchmarked performance of the Oracle-based VDW using both standard VDW scripts and an initial pre-processing layer to evaluate speed and accuracy of data return. Conclusions Adapting the VDW for deployment in an Oracle environment required minor changes to the underlying structure of the data. Further modifications of the underlying data structure would lead to performance enhancements. Maximally efficient data access for standard VDW scripts requires an extra step that involves restricting the data to the population of interest at the data server level prior to standard processing.
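
    The performance point above (restrict rows to the population of interest inside the database before extraction, rather than filtering after a full pull) can be illustrated with any relational backend. Below, Python's built-in sqlite3 stands in for Oracle; the table, view, and column names are invented and are not the HMORN VDW schema.

```python
import sqlite3

# Sketch of 'pass-through'-style processing: the server filters to the
# cohort of interest before any rows reach the client. sqlite3 stands
# in for Oracle; names are invented for illustration.

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vdw_rx (mrn TEXT, ndc TEXT, rxdate TEXT)")
con.executemany("INSERT INTO vdw_rx VALUES (?, ?, ?)", [
    ("A1", "00093-7146", "2010-02-01"),
    ("A2", "00093-9999", "2010-03-15"),
    ("A1", "00093-7146", "2010-06-20"),
])

# An interface layer of views can expose standardized VDW variables
con.execute("CREATE VIEW rx AS SELECT mrn, ndc, rxdate FROM vdw_rx")

# The WHERE clause runs inside the database, not on the client
cohort = ("A1",)
placeholders = ",".join("?" * len(cohort))
rows = con.execute(
    f"SELECT mrn, ndc, rxdate FROM rx WHERE mrn IN ({placeholders})",
    cohort,
).fetchall()
print(rows)
```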

  2. Effects of imputation on correlation: implications for analysis of mass spectrometry data from multiple biological matrices.

    PubMed

    Taylor, Sandra L; Ruhaak, L Renee; Kelly, Karen; Weiss, Robert H; Kim, Kyoungmi

    2017-03-01

    With expanded access to, and decreased costs of, mass spectrometry, investigators are collecting and analyzing multiple biological matrices from the same subject such as serum, plasma, tissue and urine to enhance biomarker discoveries, understanding of disease processes and identification of therapeutic targets. Commonly, each biological matrix is analyzed separately, but multivariate methods such as MANOVAs that combine information from multiple biological matrices are potentially more powerful. However, mass spectrometric data typically contain large amounts of missing values, and imputation is often used to create complete data sets for analysis. The effects of imputation on multiple biological matrix analyses have not been studied. We investigated the effects of seven imputation methods (half minimum substitution, mean substitution, k-nearest neighbors, local least squares regression, Bayesian principal components analysis, singular value decomposition and random forest), on the within-subject correlation of compounds between biological matrices and its consequences on MANOVA results. Through analysis of three real omics data sets and simulation studies, we found the amount of missing data and imputation method to substantially change the between-matrix correlation structure. The magnitude of the correlations was generally reduced in imputed data sets, and this effect increased with the amount of missing data. Significant results from MANOVA testing also were substantially affected. In particular, the number of false positives increased with the level of missing data for all imputation methods. No one imputation method was universally the best, but the simple substitution methods (Half Minimum and Mean) consistently performed poorly. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
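
    The paper's central observation, that substitution imputation tends to shrink between-matrix correlations, is easy to reproduce on synthetic data. The sketch below censors low-abundance values to mimic detection limits and compares half-minimum and mean substitution; all data are simulated.

```python
import numpy as np

# Demonstration on simulated data: two correlated 'compounds' (think
# serum and urine), with the lowest urine values censored to mimic
# detection limits, then imputed by constant substitution.

rng = np.random.default_rng(3)
n = 500
serum = rng.lognormal(0, 0.5, n)
urine = serum * np.exp(rng.normal(0, 0.3, n))   # correlated with serum

def with_missing(x, frac=0.3):
    """Censor the lowest `frac` of values, mimicking detection limits."""
    out = x.copy()
    out[x < np.quantile(x, frac)] = np.nan
    return out

u = with_missing(urine)

def half_min(x):
    out = x.copy()
    out[np.isnan(out)] = np.nanmin(out) / 2
    return out

def mean_sub(x):
    out = x.copy()
    out[np.isnan(out)] = np.nanmean(out)
    return out

complete = ~np.isnan(u)
print("complete-case r  :", np.corrcoef(serum[complete], u[complete])[0, 1])
print("half-minimum r   :", np.corrcoef(serum, half_min(u))[0, 1])
print("mean-substitute r:", np.corrcoef(serum, mean_sub(u))[0, 1])
```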

  3. Finding Food Deserts: A Comparison of Methods Measuring Spatial Access to Food Stores.

    PubMed

    Jaskiewicz, Lara; Block, Daniel; Chavez, Noel

    2016-05-01

    Public health research has increasingly focused on how access to resources affects health behaviors. Mapping environmental factors, such as distance to a supermarket, can identify intervention points toward improving food access in low-income and minority communities. However, the existing literature provides little guidance on choosing the most appropriate measures of spatial access. This study compared the results of different measures of spatial access to large food stores and the locations of high and low access identified by each. The data set included U.S. Census population data and the locations of large food stores in the six-county area around Chicago, Illinois. Six measures of spatial access were calculated at the census block group level and the results compared. The analysis found that there was little agreement in the identified locations of high or low access between measures. This study illustrates the importance of considering the access measure used when conducting research, interpreting results, or comparing studies. Future research should explore the correlation of different measures with health behaviors and health outcomes. © 2015 Society for Public Health Education.
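
    The simplest of the access measures compared in such studies is the distance from each census block group centroid to the nearest large food store. A minimal sketch with invented coordinates:

```python
import numpy as np

# Nearest-store distance per block group centroid, one of the simpler
# spatial access measures. Coordinates below are invented.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

block_groups = np.array([[41.88, -87.63], [41.79, -87.60], [42.05, -87.68]])
stores = np.array([[41.90, -87.62], [41.85, -87.65], [42.00, -87.70]])

for lat, lon in block_groups:
    d = haversine_km(lat, lon, stores[:, 0], stores[:, 1])
    print(f"block group ({lat}, {lon}): nearest store {d.min():.2f} km")
```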

  4. Technological Networks

    NASA Astrophysics Data System (ADS)

    Mitra, Bivas

    The study of networks in the form of mathematical graph theory is one of the fundamental pillars of discrete mathematics. However, recent years have witnessed a substantial new movement in network research. The focus of the research is shifting away from the analysis of small graphs and the properties of individual vertices or edges to the consideration of statistical properties of large-scale networks. This new approach has been driven largely by the availability of technological networks like the Internet [12] and the World Wide Web [2] that allow us to gather and analyze data on a scale far larger than previously possible. At the same time, technological networks have evolved into socio-technological systems, as concepts of social systems based on self-organization theory have become embedded in technological networks [13]. In today’s society, we have simple and universal access to great amounts of information and services. These information services are based upon the infrastructure of the Internet and the World Wide Web. The Internet is the system composed of ‘computers’ connected by cables or some other form of physical connection. Over this physical network, it is possible to exchange e-mails, transfer files, etc. The World Wide Web (commonly shortened to the Web), on the other hand, is a system of interlinked hypertext documents accessed via the Internet, where nodes represent web pages and links represent hyperlinks between the pages. Peer-to-peer (P2P) networks [26] have also recently become a popular medium through which huge amounts of data can be shared. P2P file-sharing systems, where files are searched and downloaded among peers without the help of central servers, have emerged as a major component of Internet traffic. An important advantage of P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. In this chapter, we discuss these technological networks in detail. The review is organized as follows. Section 2 presents an introduction to the Internet and the different protocols related to it. This section also specifies the socio-technological properties of the Internet, like scale invariance, the small-world property, network resilience, etc. Section 3 describes P2P networks, their categorization, and other related issues like search, stability, etc. Section 4 concludes the chapter.

  5. Random-access algorithms for multiuser computer communication networks. Doctoral thesis, 1 September 1986-31 August 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papantoni-Kazakos, P.; Paterakis, M.

    1988-07-01

    For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.

  6. Cross-polar transport and scavenging of Siberian aerosols containing black carbon during the 2012 ACCESS summer campaign

    NASA Astrophysics Data System (ADS)

    Raut, Jean-Christophe; Marelle, Louis; Fast, Jerome D.; Thomas, Jennie L.; Weinzierl, Bernadett; Law, Katharine S.; Berg, Larry K.; Roiger, Anke; Easter, Richard C.; Heimerl, Katharina; Onishi, Tatsuo; Delanoë, Julien; Schlager, Hans

    2017-09-01

    During the ACCESS airborne campaign in July 2012, extensive boreal forest fires resulted in significant aerosol transport to the Arctic. A 10-day episode combining intense biomass burning over Siberia and low-pressure systems over the Arctic Ocean resulted in efficient transport of plumes containing black carbon (BC) towards the Arctic, mostly in the upper troposphere (6-8 km). A combination of in situ observations (DLR Falcon aircraft), satellite analysis and WRF-Chem simulations is used to understand the vertical and horizontal transport mechanisms of BC, with a focus on the role of wet removal. Between the northwestern Norwegian coast and the Svalbard archipelago, the Falcon aircraft sampled plumes with enhanced CO concentrations up to 200 ppbv and BC mixing ratios up to 25 ng kg⁻¹. During transport to the Arctic region, a large fraction of BC particles are scavenged by two wet deposition processes, namely wet removal by large-scale precipitation and removal in wet convective updrafts, with both processes contributing almost equally to the total accumulated deposition of BC. Our results underline that applying a finer horizontal resolution (40 instead of 100 km) improves the model performance, as it significantly reduces the overestimation of BC levels observed at the coarser resolution in the mid-troposphere. According to the simulations at 40 km, the transport efficiency of BC (TE_BC) in biomass burning plumes was larger (60 %), because it was impacted by small accumulated precipitation along the trajectory (1 mm). In contrast, TE_BC was small (< 30 %) and accumulated precipitation amounts were larger (5-10 mm) in plumes influenced by urban anthropogenic sources and flaring activities in northern Russia, resulting in transport to lower altitudes. The reduction of TE_BC by large-scale precipitation is responsible for a sharp meridional gradient in the distribution of BC concentrations. Wet removal in cumulus clouds is the cause of the modeled vertical gradient of TE_BC, especially in the mid-latitudes, reflecting the distribution of convective precipitation, but is dominated in the Arctic region by the large-scale wet removal associated with the formation of stratocumulus clouds in the planetary boundary layer (PBL) that produce frequent drizzle.

  7. Open NASA Earth Exchange (OpenNEX): A Public-Private Partnership for Climate Change Research

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Lee, T. J.; Michaelis, A.; Ganguly, S.; Votava, P.

    2014-12-01

    NASA Earth Exchange (NEX) is a data, computing and knowledge collaborative that houses satellite, climate and ancillary data, where a community of researchers can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As part of broadening the community beyond NASA-funded researchers, NASA, through an agreement with Amazon Inc., made a large collection of climate and Earth science satellite data available to the public. The data, available through the Open NASA Earth Exchange (OpenNEX) platform hosted on the Amazon Web Services (AWS) public cloud, consist of large amounts of global land surface imaging, vegetation conditions, climate observations and climate projections. In addition to the data, users of the OpenNEX platform can also watch lectures from leading experts and learn basic access and use of the available data sets. In order to advance White House initiatives such as Open Data, Big Data and Climate Data and the Climate Action Plan, NASA over the past six months conducted the OpenNEX Challenge. The two-part challenge was designed to engage the public in creating innovative ways to use NASA data and address climate change impacts on economic growth, health and livelihood. Our intention was that the challenges would allow citizen scientists to realize the value of NASA data assets and offer NASA new ideas on how to share and use that data. The first "ideation" challenge, which closed on July 31st, attracted over 450 participants consisting of climate scientists, hobbyists, citizen scientists, IT experts and app developers. Winning ideas from the first challenge will be incorporated into the second "builder" challenge, currently targeted to launch in mid-August and close by mid-November. The winner(s) will be formally announced at AGU in December of 2014. We will share our experiences and lessons learned over the past year from OpenNEX, a public-private partnership for engaging and enabling a large community of citizen scientists to better understand global climate change and to create climate resilience.

  8. 38 CFR 21.5820 - Educational assistance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... consumable materials used as part of classroom or laboratory instruction. (2) Educational expenses may not... benefits from the educational assistance test program. (Authority: 10 U.S.C. 2143(a)) (b) Amount of... printed volume and on GPO Access. ...

  9. [Factor associated with medicines utilization and expenditure in Mexico].

    PubMed

    Wirtz, Veronika J; Serván-Mori, Edson; Heredia-Pi, Ileana; Dreser, Anahí; Ávila-Burgos, Leticia

    2013-01-01

    To analyze medicine utilization and expenditure and associated factors in Mexico, and to discuss their implications for pharmaceutical policy. Analysis of a sample of 193,228 individuals from the Mexican National Health and Nutrition Survey 2012. Probability and amount of expenditure were estimated using logit, probit and quantile regression models, evaluating three dimensions of access to medicines: (1) likelihood of utilization of medicines in the event of a health problem, (2) probability of incurring expenses and (3) amount spent on medicines. Individuals affiliated with IMSS were more likely to use medicines (OR=1.2, p<0.05). Being affiliated with IMSS, ISSSTE or SP reduced the likelihood of spending compared to those without health insurance (odds ratio about 0.7, p<0.01). Median expenditures varied between 195.3 and 274.2 pesos. The factors associated with the use of and expenditure on medicines indicate that inequities in access to medicines persist.
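
    A sketch of the kind of model behind dimension (2) above: a logit for the probability of incurring medicine expenses given an insurance indicator. The data are simulated, not the ENSANUT 2012 microdata.

```python
import numpy as np
import statsmodels.api as sm

# Logit for the probability of out-of-pocket medicine spending as a
# function of an insurance indicator. Simulated data for illustration.

rng = np.random.default_rng(5)
n = 5_000
insured = rng.integers(0, 2, n)                  # 1 = affiliated
# Simulated truth: insurance lowers the odds of out-of-pocket spending
p = 1 / (1 + np.exp(-(0.4 - 0.6 * insured)))
spent = rng.binomial(1, p)

X = sm.add_constant(insured.astype(float))
fit = sm.Logit(spent, X).fit(disp=False)
print(fit.params)                 # intercept, insured coefficient
print(np.exp(fit.params[1]))      # odds ratio for being insured (~0.55 here)
```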

  10. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    USDA-ARS?s Scientific Manuscript database

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  11. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    PubMed

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

    Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet, to the best of our knowledge, a search engine technology that searches and cross-links all the different data types in the life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoffs. The free-to-use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  12. Medical sieve: a cognitive assistant for radiologists and cardiologists

    NASA Astrophysics Data System (ADS)

    Syeda-Mahmood, T.; Walach, E.; Beymer, D.; Gilboa-Solomon, F.; Moradi, M.; Kisilev, P.; Kakrania, D.; Compas, C.; Wang, H.; Negahdar, R.; Cao, Y.; Baldwin, T.; Guo, Y.; Gur, Y.; Rajan, D.; Zlotnick, A.; Rabinovici-Cohen, S.; Ben-Ari, R.; Guy, Amit; Prasanna, P.; Morey, J.; Boyko, O.; Hashoul, S.

    2016-03-01

    Radiologists and cardiologists today have to view large amounts of imaging data relatively quickly leading to eye fatigue. Further, they have only limited access to clinical information relying mostly on their visual interpretation of imaging studies for their diagnostic decisions. In this paper, we present Medical Sieve, an automated cognitive assistant for radiologists and cardiologists designed to help in their clinical decision-making. The sieve is a clinical informatics system that collects clinical, textual and imaging data of patients from electronic health records systems. It then analyzes multimodal content to detect anomalies if any, and summarizes the patient record collecting all relevant information pertinent to a chief complaint. The results of anomaly detection are then fed into a reasoning engine which uses evidence from both patient-independent clinical knowledge and large-scale patient-driven similar patient statistics to arrive at potential differential diagnosis to help in clinical decision making. In compactly summarizing all relevant information to the clinician per chief complaint, the system still retains links to the raw data for detailed review providing holistic summaries of patient conditions. Results of clinical studies in the domains of cardiology and breast radiology have already shown the promise of the system in differential diagnosis and imaging studies summarization.

  13. Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
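
    A much-simplified sketch of the indexing idea, under stated assumptions: per-cell scalar ranges are kept as (min, max) envelopes in a binary hierarchy over time steps, and a query prunes cells whose envelope cannot contain the isovalue. The actual Temporal Hierarchical Index Tree coalesces ranges adaptively against an error tolerance; this sketch coalesces unconditionally.

```python
import numpy as np

# Simplified sketch: each cell's scalar range (min, max over its
# vertices) is stored in a binary hierarchy of time intervals; a query
# descends to the leaf for time t, pruning cells whose range cannot
# straddle the isovalue. Illustrative only, not the paper's structure.

rng = np.random.default_rng(11)
n_cells, n_steps = 1000, 8
centers = rng.uniform(0, 1, (n_steps, n_cells))
cell_min, cell_max = centers - 0.05, centers + 0.05   # per-cell vertex range

def build(t0, t1):
    """Node covering time steps [t0, t1) with per-cell range envelopes."""
    if t1 - t0 == 1:
        return {"span": (t0, t1), "lo": cell_min[t0], "hi": cell_max[t0], "kids": []}
    mid = (t0 + t1) // 2
    kids = [build(t0, mid), build(mid, t1)]
    return {"span": (t0, t1),
            "lo": np.minimum(kids[0]["lo"], kids[1]["lo"]),
            "hi": np.maximum(kids[0]["hi"], kids[1]["hi"]),
            "kids": kids}

root = build(0, n_steps)

def isosurface_cells(node, t, isovalue):
    """Mask of cells whose range at time t can contain the isovalue."""
    mask = (node["lo"] <= isovalue) & (isovalue <= node["hi"])
    for kid in node["kids"]:
        if kid["span"][0] <= t < kid["span"][1]:
            return mask & isosurface_cells(kid, t, isovalue)
    return mask

hits = np.flatnonzero(isosurface_cells(root, t=5, isovalue=0.8))
print(f"{hits.size} isosurface candidate cells at t=5")
```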

  14. Forecasting of monsoon heavy rains: challenges in NWP

    NASA Astrophysics Data System (ADS)

    Sharma, Kuldeep; Ashrit, Raghavendra; Iyengar, Gopal; Bhatla, R.; Rajagopal, E. N.

    2016-05-01

    The last decade has seen a tremendous improvement in the forecasting skill of numerical weather prediction (NWP) models. This is attributed to increased sophistication in NWP models, which resolve complex physical processes, advanced data assimilation, increased grid resolution and satellite observations. However, prediction of heavy rains is still a challenge, since the models exhibit large errors in rainfall amount as well as in its spatial and temporal distribution. Two state-of-the-art NWP models have been investigated over the Indian monsoon region to assess their ability to predict heavy rainfall events: the Unified Model operational at the National Centre for Medium Range Weather Forecasting (NCUM) and the Unified Model operational at the Australian Bureau of Meteorology (Australian Community Climate and Earth-System Simulator -- Global (ACCESS-G)). The recent (JJAS 2015) Indian monsoon season witnessed 6 depressions and 2 cyclonic storms, which resulted in heavy rains and flooding. The CRA method of verification allows the decomposition of forecast errors into errors in rainfall volume, pattern and location. The case-by-case study using the CRA technique shows that the contributions to rainfall error from pattern and displacement are large, while the contribution from errors in the predicted rainfall volume is smallest.
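
    The CRA decomposition referred to above splits total error into displacement, volume, and pattern components by translating the forecast entity to its best-fit position (after Ebert and McBride). A hedged numpy sketch on synthetic rain fields:

```python
import numpy as np

# Sketch of the CRA error decomposition: translate the forecast rain
# field to its best-fit position, then split total MSE into
# displacement, volume and pattern components. Synthetic fields only.

rng = np.random.default_rng(9)
ny, nx = 40, 40
obs = np.zeros((ny, nx))
obs[15:25, 15:25] = rng.uniform(5, 50, (10, 10))          # observed rain blob
fcst = np.roll(np.roll(obs * 0.8, 4, axis=0), 3, axis=1)  # shifted, too dry

def mse(a, b):
    return np.mean((a - b) ** 2)

# Brute-force search for the translation that minimizes MSE
best = min(((mse(np.roll(np.roll(fcst, -dy, axis=0), -dx, axis=1), obs), dy, dx)
            for dy in range(-8, 9) for dx in range(-8, 9)))
mse_shifted, dy, dx = best
shifted = np.roll(np.roll(fcst, -dy, axis=0), -dx, axis=1)

mse_total = mse(fcst, obs)
err_displacement = mse_total - mse_shifted          # removed by shifting
err_volume = (shifted.mean() - obs.mean()) ** 2     # mean bias after shift
err_pattern = mse_shifted - err_volume              # remainder

print(f"best shift: dy={dy}, dx={dx}")
for name, val in [("displacement", err_displacement),
                  ("volume", err_volume),
                  ("pattern", err_pattern)]:
    print(f"{name:13s}: {100 * val / mse_total:5.1f}% of total MSE")
```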

  15. Automated Slide Scanning and Segmentation in Fluorescently-labeled Tissues Using a Widefield High-content Analysis System.

    PubMed

    Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick

    2018-05-03

    Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Although the WHCAS was not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide-scanning images in the WHCAS, which can be imported into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, there are a variety of other quantitative software modules, including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis, that can be run. This technique will save researchers time and effort and create an automated protocol for slide analysis.

  16. A comparison of database systems for XML-type data.

    PubMed

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium-sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
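
    The kind of query benchmarked here can be illustrated on a tiny Medline-like snippet with the Python standard library; the structure below is simplified and is not the actual Medline DTD.

```python
import xml.etree.ElementTree as ET

# Minimal illustration of querying XML-type data: find citations whose
# title mentions 'XML'. Structure simplified, not the Medline DTD.

doc = """
<MedlineCitationSet>
  <MedlineCitation><PMID>111</PMID>
    <Article><ArticleTitle>XML storage benchmarks</ArticleTitle></Article>
  </MedlineCitation>
  <MedlineCitation><PMID>222</PMID>
    <Article><ArticleTitle>Protein folding review</ArticleTitle></Article>
  </MedlineCitation>
</MedlineCitationSet>
"""

root = ET.fromstring(doc)
for cit in root.iter("MedlineCitation"):
    title = cit.findtext("Article/ArticleTitle")
    if title and "XML" in title:
        print(cit.findtext("PMID"), title)
```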

  17. Web-Based Consumer Health Information: Public Access, Digital Division, and Remainders

    PubMed Central

    Lorence, Daniel; Park, Heeyoung

    2006-01-01

    Public access Internet portals and decreasing costs of personal computers have created a growing consensus that unequal access to information, or a “digital divide,” has largely disappeared for US consumers. A series of technology initiatives in the late 1990s were believed to have largely eliminated the divide. For healthcare patients, access to information is an essential part of the consumer-centric framework outlined in the recently proposed national health information initiative. Data from a recent study of health information-seeking behaviors on the Internet suggest that a “digitally underserved group” persists, effectively limiting the planned national health information infrastructure to wealthier Americans. PMID:16926743

  18. Consumption with Large Sip Sizes Increases Food Intake and Leads to Underestimation of the Amount Consumed

    PubMed Central

    Bolhuis, Dieuwerke P.; Lakemond, Catriona M. M.; de Wijk, Rene A.; Luning, Pieternel A.; de Graaf, Cees

    2013-01-01

    Background A number of studies have shown that bite and sip sizes influence the amount of food intake. Consuming with small sips instead of large sips means relatively more sips for the same amount of food to be consumed; people may believe that intake is higher, which leads to faster satiation. This effect may be disturbed when people are distracted. Objective The objective of the study is to assess the effects of sip size, in both a focused and a distracted state, on ad libitum intake and on the estimated amount consumed. Design In this 3×2 cross-over design, 53 healthy subjects consumed soup ad libitum with small sips (5 g, 60 g/min), large sips (15 g, 60 g/min), and free sips (where sip size was determined by the subjects themselves), in both a distracted and a focused state. Sips were administered via a pump. There were no visual cues toward consumption. Subjects then estimated how much they had consumed by filling soup bowls with the amount they believed they had eaten. Results Intake in the small-sip condition was ∼30% lower than in both the large-sip and free-sip conditions (P<0.001). In addition, subjects underestimated how much they had consumed in the large-sip and free-sip conditions (P<0.03). Distraction led to a general increase in food intake (P = 0.003), independent of sip size. Distraction did not influence sip size or estimations. Conclusions Consumption with large sips led to higher food intake, as expected. Large sips, whether fixed or chosen by subjects themselves, led to underestimations of the amount consumed. This may be a risk factor for over-consumption. Reducing sip or bite sizes may successfully lower food intake, even in a distracted state. PMID:23372657

  19. Sewage Management Changes in the North-eastern Poland After Accession to the European Union

    NASA Astrophysics Data System (ADS)

    Skarżyński, Szymon; Bartkowska, Izabela

    2018-02-01

    Poland's accession to the European Union contributed to the infrastructure development of the whole country. One of the elements of the modernized infrastructure is the sewage network and the facilities on this network, as well as facilities for wastewater treatment and sludge disposal. A wide stream of funds flowing into the country, and consequently also into the north-eastern Polish voivodeships (Podlaskie, Warmian-Masurian, Lublin), made it possible to modernize, reorganize, and in some cases build from scratch the sewage management of this part of the country. The main factors and parameters that allow the development of sewage management in north-eastern Poland to be evaluated include: the percentage of the population using sewage treatment plants, the number of municipal sewage plants by type, the number of industrial plants, the number of septic tanks, the amount of sewage purified per year, the amount of sludge produced per year, the design capacity of sewage treatment plants, and the size of plants in population equivalent (PE). Of the many wastewater management investments carried out in the discussed area after Poland's accession to the European Union, nine were considered the most important, three from each of the voivodeships.

  20. Model for fluorescence quenching in light harvesting complex II in different aggregation states.

    PubMed

    Andreeva, Atanaska; Abarova, Silvia; Stoitchkova, Katerina; Busheva, Mira

    2009-02-01

    Low-temperature (77 K) steady-state fluorescence emission spectroscopy and dynamic light scattering were applied to the main chlorophyll a/b protein light harvesting complex of photosystem II (LHC II) in different aggregation states to elucidate the mechanism of fluorescence quenching within LHC II oligomers. Evidence is presented that LHC II oligomers are heterogeneous and consist of large and small particles with different fluorescence yields. At intermediate detergent concentrations the mean size of the small particles is similar to that of trimers, while the size of the large particles is comparable to that of aggregated trimers without added detergent. It is suggested that in small particles and trimers the emitter is monomeric chlorophyll, whereas in large aggregates there is also another emitter, which is a poorly fluorescing chlorophyll associate. A model, describing populations of antenna chlorophyll molecules in small and large aggregates in their ground and first singlet excited states, is considered. The model enables us to obtain the ratio of the singlet excited-state lifetimes in small and large particles, the relative amount of chlorophyll molecules in large particles, and the amount of quenchers as a function of the degree of aggregation. These dependencies reveal that the quenching of the chl a fluorescence upon aggregation is due to the formation of large aggregates and the increase in the amount of chlorophyll molecules forming these aggregates. As a consequence, the amount of quenchers located in large aggregates increases, and their singlet excited-state lifetimes steeply decrease.
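
    A minimal two-population reading of the described model, with symbols and normalization chosen here (they are not the authors' notation): let x be the fraction of chlorophylls in large aggregates and τ_s, τ_l the singlet excited-state lifetimes in small and large particles.

```latex
% Assumed two-population yield model (notation mine, not the authors'):
% with a common radiative rate k_r, the steady-state fluorescence is
F \propto k_r\left[(1-x)\,\tau_s + x\,\tau_l\right],
% so that, relative to a fully disaggregated sample (x = 0),
\frac{F}{F_0} = 1 - x\left(1 - \frac{\tau_l}{\tau_s}\right).
```

    Measuring F/F_0 across aggregation states then constrains both x and the lifetime ratio τ_l/τ_s, the two quantities the abstract reports extracting.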

  1. The Fermilab Accelerator control system

    NASA Astrophysics Data System (ADS)

    Bogert, Dixon

    1986-06-01

    With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as large an amount of data as possible available to many operators or end-users. Specifically there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.

  2. Effects of activity, genetic selection and their interaction on muscle metabolic capacities and organ masses in mice.

    PubMed

    Kelly, Scott A; Gomes, Fernando R; Kolb, Erik M; Malisch, Jessica L; Garland, Theodore

    2017-03-15

    Chronic voluntary exercise elevates total daily energy expenditure and food consumption, potentially resulting in organ compensation supporting nutrient extraction/utilization. Additionally, species with naturally higher daily energy expenditure often have larger processing organs, which may represent genetic differences and/or phenotypic plasticity. We tested for possible adaptive changes in organ masses of four replicate lines of house mice selected (37 generations) for high running (HR) compared with four non-selected control (C) lines. Females were housed with or without wheel access for 13-14 weeks beginning at 53-60 days of age. In addition to organ compensation, chronic activity may also require an elevated aerobic capacity. Therefore, we also measured hematocrit and both citrate synthase activity and myoglobin concentration in heart and gastrocnemius. Both selection (HR versus C) and activity (wheels versus no wheels) significantly affected morphological and biochemical traits. For example, with body mass as a covariate, mice from HR lines had significantly higher hematocrit and larger ventricles, with more myoglobin. Wheel access lengthened the small intestine, increased relative ventricle and kidney size, and increased skeletal muscle citrate synthase activity and myoglobin concentration. As compared with C lines, HR mice had greater training effects for ventricle mass, hematocrit, large intestine length and gastrocnemius citrate synthase activity. For ventricle mass and gastrocnemius citrate synthase activity, the greater training effect was quantitatively explainable as a result of greater wheel running (i.e. 'more pain, more gain'). For hematocrit and large intestine length, the differences were not related to the amount of wheel running and instead indicate inherently greater adaptive plasticity in HR lines. © 2017. Published by The Company of Biologists Ltd.

  3. Geospatial Data as a Service: Towards planetary scale real-time analytics

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Larraondo, P. R.; Antony, J.; Richards, C. J.

    2017-12-01

    The rapid growth of earth systems, environmental and geophysical datasets poses a challenge to both end-users and infrastructure providers. For infrastructure and data providers, tasks like managing, indexing and storing large collections of geospatial data need to take into consideration the various use cases by which consumers will want to access and use the data. Considerable investment has been made by the Earth Science community to produce suitable real-time analytics platforms for geospatial data, and several interfaces have been defined to provide data services. Unfortunately, there are considerable differences among the standards, protocols and data models, which have been designed to target specific communities or working groups. The Australian National University's National Computational Infrastructure (NCI) is used for a wide range of activities in the geospatial community. Earth observations, climate and weather forecasting are examples of these communities which generate large amounts of geospatial data. The NCI has been carrying out a significant effort to develop a data and services model that enables the cross-disciplinary use of data. Recent developments in cloud and distributed computing provide a publicly accessible platform on which new infrastructures can be built. One of the key capabilities these technologies offer is the possibility of having "limitless" compute power next to where the data is stored. This model is rapidly transforming data delivery from centralised monolithic services towards ubiquitous distributed services that scale up and down, adapting to fluctuations in demand. NCI has developed GSKY, a scalable, distributed server which presents a new approach to geospatial data discovery and delivery based on OGC standards. We will present the architecture and motivating use cases that drove GSKY's collaborative design, development and production deployment, and show that our approach offers the community valuable exploratory analysis capabilities for dealing with petabyte-scale geospatial data collections.
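
    The practical value of OGC standards is that any compliant client can query a service such as GSKY without server-specific code. A generic WMS GetMap call in Python is sketched below (the endpoint URL and layer name are placeholders, not GSKY's actual service):

        import requests  # generic OGC Web Map Service (WMS) client call

        params = {
            "service": "WMS", "version": "1.3.0", "request": "GetMap",
            "layers": "surface_temperature",               # placeholder layer name
            "crs": "EPSG:4326",
            "bbox": "-44,112,-10,154",                     # lat/lon box over Australia (WMS 1.3.0 axis order)
            "width": 512, "height": 512, "format": "image/png",
            "time": "2017-12-01T00:00:00Z",
        }
        resp = requests.get("https://example.org/ows", params=params, timeout=60)
        resp.raise_for_status()
        with open("map.png", "wb") as f:
            f.write(resp.content)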

  4. Integrating Infrastructure and Institutions for Water Security in Large Urban Areas

    NASA Astrophysics Data System (ADS)

    Padowski, J.; Jawitz, J. W.; Carrera, L.

    2015-12-01

    Urban growth has forced cities to procure more freshwater to meet demands; however, the relationship between urban water security, water availability and water management is not well understood. This work quantifies the urban water security of 108 large cities in the United States (n=50) and Africa (n=58) based on their hydrologic, hydraulic and institutional settings. Using publicly available data, urban water availability was estimated as the volume of water available from local water resources and those captured via hydraulic infrastructure (e.g. reservoirs, wellfields, aqueducts), while urban water institutions were assessed according to their ability to deliver, supply and regulate water resources to cities. When assessing availability, cities relying on local water resources comprised a minority (37%) of those assessed. The majority of cities (55%) instead rely on captured water to meet urban demands, with African cities reaching farther and accessing a greater number and variety of sources for water supply than US cities. Cities using captured water generally had poorer access to local water resources and maintained significantly more complex strategies for water delivery, supply and regulatory management. Eight cities, all African, are identified in this work as having water insecurity issues. These cities lack sufficient infrastructure and institutional complexity to capture and deliver adequate amounts of water for urban use. Together, these findings highlight the important interconnection between infrastructure investments and management techniques for urban areas with a limited or dwindling natural abundance of water. Addressing water security challenges in the future will require that more attention be placed not only on increasing water availability but also on developing the institutional support to manage captured water supplies.

  5. D and D Knowledge Management Information Tool - 2012 - 12106

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyay, H.; Lagos, L.; Quintero, W.

    2012-07-01

    Deactivation and decommissioning (D and D) work is a high priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with the different ALARA (As-Low-As-Reasonably-Achievable) Centers, DOE sites, the Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent this D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. D and D KM-IT provides single-point access to all D and D related activities through its knowledge base. It is a community-driven system that makes D and D knowledge available to the people who need it, at the time they need it, and in a readily usable format. It uses the World Wide Web as the primary source of content, in addition to information collected from subject matter specialists and the D and D community, and brings information in real time through web-based custom search processes and its dynamic knowledge repository. Future developments include a document library, D and D information access on mobile devices for the Technology module and Hotline, and coordination of multiple subject matter specialists to support the Hotline. The goal is to deploy a sophisticated, secure, high-end system that serves as a single large knowledge base for all D and D activities. The system consolidates a large amount of information available on the web and presents it to users in the simplest way possible. (authors)

  6. Beyond Open Data: the importance of data standards and interoperability - Experiences from ECMWF's Open Data Week

    NASA Astrophysics Data System (ADS)

    Wagemann, Julia; Siemen, Stephan

    2017-04-01

    The European Centre for Medium-Range Weather Forecasts (ECMWF) has been providing an increasing amount of data to the public. Among the most widely used datasets are the global climate reanalyses (e.g. ERA-Interim) and atmospheric composition data, which are available to the public free of charge. The centre further operates, on behalf of the European Commission, two Copernicus Services, the Copernicus Atmosphere Monitoring Service (CAMS) and the Copernicus Climate Change Service (C3S), which make up-to-date environmental information freely available to scientists, policy makers and businesses. However, to fully benefit from open data, large environmental datasets also have to be easily accessible in a standardised, machine-readable format. Traditional data centres, such as ECMWF, currently face challenges in providing interoperable, standardised access to increasingly large and complex datasets for scientists and industry. ECMWF therefore put open data in the spotlight during a week of events in March 2017, exploring the potential of freely available weather- and climate-related data and reviewing technological solutions for serving these data. Key events included a Workshop on Meteorological Operational Systems (MOS) and a two-day hackathon. The MOS workshop aimed at reviewing technologies and practices that ensure efficient (open) data processing and provision. The hackathon focused on exploring creative uses of open environmental data and on how open data can benefit various industries. The presentation reviews the outcomes and conclusions of the Open Data Week at ECMWF, with a specific focus on the importance of data standards and web services in making open environmental data a success. Overall, it examines the opportunities and challenges of open environmental data from a data provider's perspective.
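
    As a concrete example of standardised, machine-readable access, ERA-Interim fields can be retrieved programmatically through the ECMWF Web API. The sketch below assumes the ecmwf-api-client package and a configured API key; the parameter values are illustrative, not a recommended request:

        from ecmwfapi import ECMWFDataServer  # pip install ecmwf-api-client

        server = ECMWFDataServer()  # reads credentials from ~/.ecmwfapirc
        server.retrieve({
            "class": "ei",            # ERA-Interim
            "dataset": "interim",
            "date": "2016-01-01/to/2016-01-31",
            "levtype": "sfc",         # surface fields
            "param": "167.128",       # 2-metre temperature
            "step": "0",
            "stream": "oper",
            "time": "00:00:00/06:00:00/12:00:00/18:00:00",
            "type": "an",             # analysis
            "target": "era_interim_t2m_201601.grib",
        })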

  7. September 2013 Storm and Flood Assessment Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walterscheid, J. C.

    2015-12-21

    Between September 10 and 17, 2013, New Mexico and Colorado received a historically large amount of precipitation (Figure 1). This report assesses the damage caused by flooding, along with estimated costs to repair the damage, at Los Alamos National Laboratory (the Laboratory) on the Pajarito Plateau. Los Alamos County, New Mexico, received between 200% and 600% of the normal precipitation for this time period (Figure 2), and the Laboratory received approximately 450% of its average precipitation for September (Figure 3). As a result, the Laboratory was inundated with rain, including the extremely large, greater-than-1000-yr return period event that occurred between September 12 and 13 (Table 1). With saturated antecedent soil conditions from the September 10 storm, when the September 12 to September 13 storm hit, the flooding was disastrous to the Laboratory’s environmental infrastructure, including access roads, gage stations, watershed controls, control measures installed under the National Pollutant Discharge Elimination System Permit (hereafter, the Individual Permit), and groundwater monitoring wells (Figures 4 through 21). From September 16 to October 1, 2013, the Laboratory completed field assessments of environmental infrastructure and generated descriptions and estimates of the damage, which are presented in spreadsheets in Attachments 1 to 4 of this report. Section 2 of this report contains damage assessments by watershed, including access roads, gage stations, watershed controls, and control measures installed under the Individual Permit. Section 3 contains damage assessments of monitoring wells by the groundwater monitoring groups as established in the Interim Facility-Wide Groundwater Monitoring Plan for Monitoring Year 2014. Section 4 addresses damage and loss of automated samplers. Section 5 addresses sediment sampling needs, and Section 6 summarises the estimated recovery costs from the significant rain and flooding during September 2013.

  8. Realising the technological promise of smartphones in addiction research and treatment: An ethical review.

    PubMed

    Capon, Hannah; Hall, Wayne; Fry, Craig; Carter, Adrian

    2016-10-01

    Smartphone technologies and mHealth applications (or apps) promise unprecedented scope for data collection, treatment intervention, and relapse prevention when used in the field of substance abuse and addiction. This potential also raises new ethical challenges that researchers, clinicians, and software developers must address. This paper aims to identify ethical issues in the current uses of smartphones in addiction research and treatment. A search of three databases (PubMed, Web of Science and PsycInfo) identified 33 studies involving smartphones or mHealth applications for use in the research and treatment of substance abuse and addiction. A content analysis was conducted to identify how smartphones are being used in these fields and to highlight the ethical issues raised by these studies. Smartphones are being used to collect large amounts of sensitive information, including personal information, geo-location, physiological activity, self-reports of mood and cravings, and the consumption of illicit drugs, alcohol and nicotine. Given that detailed information is being collected about potentially illegal behaviour, we identified the following ethical considerations: protecting user privacy, maximising equity in access, ensuring informed consent, providing participants with adequate clinical resources, communicating clinically relevant results to individuals, and the urgent need to demonstrate evidence of safety and efficacy of the technologies. mHealth technology offers the possibility to collect large amounts of valuable personal information that may enhance research and treatment of substance abuse and addiction. To realise this potential, researchers, clinicians and app developers must address these ethical concerns to maximise the benefits and minimise the risks of harm to users. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Estimation of the effective heating systems radius as a method of the reliability improving and energy efficiency

    NASA Astrophysics Data System (ADS)

    Akhmetova, I. G.; Chichirova, N. D.

    2017-11-01

    When conducting an energy survey of a heat supply enterprise operating several boilers located near one another, it is advisable to assess the efficiency of heat supply from each individual boiler and the potential for reducing the enterprise's overall energy consumption by switching consumers to a more efficient source and closing inefficient boilers. The temporal dynamics of prospective load connections and changing market conditions must also be considered. This problem can be addressed by calculating the radius of effective heat supply from each thermal energy source. The disadvantage of existing methods is their high complexity: they require collecting large amounts of source data and performing a significant amount of computation. When a heat supply enterprise operates a large number of thermal energy sources, an energy survey requires a rapid estimate of the effective heating radius. Given the specifics and objectives of an energy audit, a method for calculating the effective heating radius should rely on data that the heat supply organization makes openly available and should minimize effort, while still matching the results obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of cost for generation and transmission of thermal energy and the capital investment needed to connect new consumers were determined. The results were compared with values obtained using previously known methods. The suggested express method determines the effective radius of centralized heat supply from heat sources during energy audits with minimal effort and the required accuracy.
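
    The abstract gives no formulas for the express method, so the sketch below only illustrates the general shape of an effective-radius criterion: centralized supply remains worthwhile out to the distance at which generation-plus-transmission cost per unit of heat reaches the cost of the cheapest alternative source. All names and figures are invented:

        def effective_radius(gen_cost, transmission_cost_per_km, alt_cost, max_radius=50.0):
            """Distance (km) at which centralized heat cost reaches the
            alternative-source cost; an illustrative criterion only."""
            if transmission_cost_per_km <= 0 or alt_cost <= gen_cost:
                return 0.0
            r = (alt_cost - gen_cost) / transmission_cost_per_km
            return min(r, max_radius)

        # Invented cost figures per Gcal: generation 900, +35 per km, alternative 1400.
        print(effective_radius(900.0, 35.0, 1400.0))  # ~14.3 km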

  10. Cannabinoid Disposition After Human Intraperitoneal Use: An Insight Into Intraperitoneal Pharmacokinetic Properties in Metastatic Cancer.

    PubMed

    Lucas, Catherine J; Galettis, Peter; Song, Shuzhen; Solowij, Nadia; Reuter, Stephanie E; Schneider, Jennifer; Martin, Jennifer H

    2018-01-06

    Medicinal cannabis is prescribed under the provision of a controlled drug in the Australian Poisons Standard. However, multiple laws must be navigated for patients to obtain access, and imported products can be expensive. Dose-response information for both the efficacy and toxicity of medicinal cannabis is lacking. The pharmacokinetic properties of cannabis administered by traditional routes have been described, but to date there is no literature on the pharmacokinetic properties of an intraperitoneal cannabinoid emulsion. A cachectic 56-year-old female with stage IV ovarian cancer and peritoneal metastases presented to hospital with fevers, abdominal distension and severe pain, vomiting, anorexia, dehydration and confusion. The patient reported receiving an intraperitoneal injection, purported to contain 12 g of mixed cannabinoids (administered by a deregistered medical practitioner), two days prior to presentation. Additionally, cannabis oil oral capsules were administered in the hours prior to hospital admission. THC concentrations were consistent with the clinical state but not with the known pharmacokinetic properties of cannabis or of intraperitoneal absorption. THC concentrations at the time of presentation were predicted to be ~60 ng/mL. Evidence suggests that blood THC concentrations >5 ng/mL are associated with substantial cognitive and psychomotor impairment. The predicted time for concentrations to drop below 5 ng/mL was 49 days after administration. The unusual pharmacokinetics of this case suggest that much remains unknown about the pharmacokinetic properties of cannabis. The disposition of a large amount of a lipid-soluble compound given intraperitoneally gave insights into the absorption and distribution of cannabinoids, particularly in the setting of metastatic malignancy. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
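
    The reported numbers imply an unusually slow apparent elimination. As a worked check, assuming simple first-order elimination (a strong simplification for what is effectively a lipid depot), the two stated concentrations fix the apparent rate constant and half-life:

        \[
          C(t) = C_0 e^{-kt},
          \qquad
          k = \frac{\ln(60/5)}{49\ \text{days}} \approx 0.051\ \text{day}^{-1},
          \qquad
          t_{1/2} = \frac{\ln 2}{k} \approx 13.7\ \text{days},
        \]

    far longer than the half-lives typically reported for THC after inhaled or oral dosing, consistent with slow release from an intraperitoneal depot.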

  11. Impact of relativistic effects on cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

    2018-01-01

    Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification on number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
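
    The role of s(z) reflects the standard magnification-bias term in the observed number counts. Schematically, using the textbook expression (symbols as conventionally defined, not notation taken from this paper), the magnification contribution to the galaxy overdensity is

        \[
          \Delta_g^{\rm mag}(\hat{n}, z) = \bigl(5\,s(z) - 2\bigr)\,\kappa(\hat{n}, z),
        \]

    where κ is the lensing convergence. The term vanishes when s = 2/5, so both the size and the sign of the bias incurred by neglecting magnification depend directly on how well s(z) is measured.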

  12. Predicting Classifier Performance with Limited Training Data: Applications to Computer-Aided Diagnosis in Breast and Prostate Cancer

    PubMed Central

    Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant

    2015-01-01

    Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
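
    The extrapolation step can be illustrated generically: measure error rates by repeated random sampling at several training-set sizes, fit an inverse power-law learning curve, and evaluate it at the target size. A minimal sketch with scikit-learn and SciPy follows (not the authors' RRS implementation, and with a synthetic dataset in place of the imaging data):

        import numpy as np
        from scipy.optimize import curve_fit
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        def learning_curve(n, a, b, c):
            # Inverse power law: error decays toward an asymptote c as n grows.
            return a * np.power(n, -b) + c

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

        sizes = np.array([50, 100, 200, 400, 800])
        mean_err = []
        for n in sizes:
            errs = []
            for seed in range(20):  # repeated random sampling at each training size
                idx = np.random.RandomState(seed).choice(len(X_pool), n, replace=False)
                clf = KNeighborsClassifier().fit(X_pool[idx], y_pool[idx])
                errs.append(1.0 - clf.score(X_test, y_test))
            mean_err.append(np.mean(errs))

        (a, b, c), _ = curve_fit(learning_curve, sizes, mean_err, p0=[1.0, 0.5, 0.05], maxfev=10000)
        print("extrapolated error at n=5000: %.4f" % learning_curve(5000, a, b, c))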

  13. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different mechanisms, such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication channels and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Using this model, different kinds of information are extracted from the image data and then handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and face-related features, low-level image features of the human hand (optical flow, Hu moments) are stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via Hidden Markov Models, which have proven to be a fast and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (such as a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
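
    Per-class HMM scoring of the kind described here is straightforward to sketch. The toy example below (using the hmmlearn package as a convenience, not the paper's RTDB-based system) trains one Gaussian HMM per gesture and classifies an unseen sequence by maximum log-likelihood:

        import numpy as np
        from hmmlearn import hmm  # pip install hmmlearn

        rng = np.random.RandomState(0)

        def make_sequences(offset, n_seq=20, length=30, dim=4):
            # Toy stand-in for per-frame features (e.g., optical flow, Hu moments).
            return [offset + rng.randn(length, dim) for _ in range(n_seq)]

        train = {"nod": make_sequences(0.0), "shake": make_sequences(2.0)}
        models = {}
        for label, seqs in train.items():
            X = np.vstack(seqs)                 # concatenated frames
            lengths = [len(s) for s in seqs]    # per-sequence lengths
            m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=25)
            m.fit(X, lengths)
            models[label] = m

        test_seq = 2.0 + rng.randn(30, 4)       # unseen "shake"-like sequence
        scores = {label: m.score(test_seq) for label, m in models.items()}
        print(max(scores, key=scores.get))      # expected: "shake"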

  14. Does informal care impact utilization of healthcare services? Evidence from a longitudinal study of stroke patients.

    PubMed

    Torbica, Aleksandra; Calciolari, Stefano; Fattore, Giovanni

    2015-01-01

    Understanding the interplay between informal care and formal healthcare is important because it sheds light on the financial implications of such interactions and may result in different policies. On the basis of a major database of 532 Italian stroke patients enrolled in the period 2007-2008, we investigate whether the presence of a potential caregiver and the amount of informal care provided influence the use and the costs of healthcare services, and in particular rehabilitation, in the post-acute phase. Primary caregivers of stroke patients were interviewed at 3, 6 and 12 months after the acute event, and use of healthcare and informal care were documented. The panel dataset included socio-demographic, clinical and economic data on patients and caregivers. A longitudinal log-linear model was applied to test the impact of informal care on total healthcare costs in the observation period, and a double hurdle model was used to investigate its impact on rehabilitation costs. A total of 476 stroke survivors in 44 hospitals were enrolled in the study, and the presence of an informal caregiver was reported in approximately 50% of the sample (range 48.2-52.5% across the three periods). Healthcare costs at 12 months after the acute event are €5825 per patient, with rehabilitation costs amounting to €3985 (68.4%). Healthcare costs are significantly different between patients with and without a caregiver in all three periods. The presence of a caregiver is associated with a 54.7% increase in direct healthcare costs (p < 0.01), whereas the amount of informal care provided does not significantly influence direct healthcare costs. The presence of a caregiver significantly increases the probability of access to rehabilitation services (β = 0.648, p = 0.039) while, once the decision on access is made, it does not influence the amount of services used. Our results suggest that informal caregivers facilitate or even promote access to healthcare services. Copyright © 2014 Elsevier Ltd. All rights reserved.
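
    The double hurdle structure separates whether rehabilitation is accessed from how much is then used. Schematically, assuming a probit first hurdle and a log-linear second stage (a generic formulation, not necessarily the authors' exact specification):

        \[
          P(y_i > 0) = \Phi(z_i'\alpha),
          \qquad
          \ln y_i \mid y_i > 0 = x_i'\beta + \varepsilon_i,
        \]

    so a covariate such as caregiver presence can be significant at the first hurdle (the reported β = 0.648 on access) while leaving the conditional amount equation unaffected.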

  15. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks.

    PubMed

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk

    2016-07-18

    Super dense wireless sensor networks (WSNs) have become popular with the development of the Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicle-to-Vehicle (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and that, in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.
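
    The benefit of adding a spatial dimension to random access can be seen with a toy Monte Carlo: if each node picks a random time slot and, with directional transmission, a random sector, a collision requires a clash in both. The sketch below is illustrative only (invented parameters, not the paper's protocol):

        import numpy as np

        rng = np.random.default_rng(0)

        def collision_rate(n_nodes, n_slots, n_sectors=1, trials=2000):
            """Monte Carlo estimate of the fraction of transmissions lost to
            collisions when nodes pick a random slot (time) and, optionally,
            a random directional sector (space)."""
            lost = total = 0
            for _ in range(trials):
                slots = rng.integers(0, n_slots, n_nodes)
                sectors = rng.integers(0, n_sectors, n_nodes)
                keys = slots * n_sectors + sectors      # combined space-time resource
                _, counts = np.unique(keys, return_counts=True)
                lost += counts[counts > 1].sum()        # all nodes in a clashed resource
                total += n_nodes
            return lost / total

        print("time-only  :", collision_rate(200, 64, n_sectors=1))
        print("space-time :", collision_rate(200, 64, n_sectors=8))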

  16. Methamphetamine drinking microstructure in mice bred to drink high or low amounts of methamphetamine

    PubMed Central

    Eastwood, Emily C.; Barkley-Levenson, Amanda M.; Phillips, Tamara J.

    2014-01-01

    Genetic factors likely influence individual sensitivity to positive and negative effects of methamphetamine (MA) and risk for MA dependence. Genetic influence on MA consumption has been confirmed by selectively breeding mouse lines to consume high (MAHDR) or low (MALDR) amounts of MA, using a two-bottle choice MA drinking (MADR) procedure. Here, we employed a lickometer system to characterize the microstructure of MA (20, 40, and 80 mg/l) and water intake in MAHDR and MALDR mice in 4-h limited access sessions, during the initial 4 hours of the dark phase of their 12:12 h light:dark cycle. Licks at one-minute intervals and total volume consumed were recorded, and bout analysis was performed. MAHDR and MALDR mice consumed similar amounts of MA in mg/kg on the first day of access, but MAHDR mice consumed significantly more MA than MALDR mice during all subsequent sessions. The higher MA intake of MAHDR mice was associated with a larger number of MA bouts, longer bout duration, shorter interbout interval, and shorter latency to the first bout. In a separate 4-h limited access MA drinking study, MALDR and MAHDR mice had similar blood MA levels on the first day MA was offered, but MAHDR mice had higher blood MA levels on all subsequent days, which corresponded with MA intake. These data provide insight into the microstructure of MA intake in an animal model of differential genetic risk for MA consumption, which may be pertinent to MA use patterns relevant to genetic risk for MA dependence. PMID:24978098
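
    Bout analysis of lickometer data of the kind described reduces to grouping timestamps by an interbout-interval criterion. A generic sketch follows (the threshold value is an assumption; the paper's exact bout criteria are not given in the abstract):

        import numpy as np

        def bout_stats(lick_times_min, ibi_threshold=1.0):
            """Group lick timestamps (in minutes) into bouts: a gap longer than
            ibi_threshold starts a new bout. Returns summary statistics."""
            t = np.sort(np.asarray(lick_times_min, dtype=float))
            gaps = np.diff(t)
            starts = np.concatenate(([0], np.where(gaps > ibi_threshold)[0] + 1))
            ends = np.concatenate((starts[1:] - 1, [len(t) - 1]))
            durations = t[ends] - t[starts]
            interbout = t[starts[1:]] - t[ends[:-1]]
            return {
                "n_bouts": len(starts),
                "mean_bout_duration": durations.mean(),
                "mean_interbout_interval": interbout.mean() if len(interbout) else np.nan,
                "latency_to_first_bout": t[0],
            }

        licks = [5.0, 5.2, 5.5, 9.0, 9.1, 9.4, 20.0, 20.3]  # toy data
        print(bout_stats(licks))  # 3 bouts, latency 5.0 min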

  17. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.

  18. A study of institutional spending on open access publication fees in Germany.

    PubMed

    Jahn, Najko; Tullney, Marco

    2016-01-01

    Publication fees as a revenue source for open access publishing hold a prominent place on the agendas of researchers, policy makers, and academic publishers. This study contributes to the evolving empirical basis for funding these charges and examines how much German universities and research organisations spent on open access publication fees. Using self-reported cost data from the Open APC initiative, the analysis focused on the amount being spent on publication fees and compared these expenditures with data from related Austrian (FWF) and UK (Wellcome Trust, Jisc) initiatives, in terms of both size and the proportion of articles being published in fully open access and hybrid journals. We also investigated how thoroughly self-reported articles were indexed in Crossref, a DOI minting agency for scholarly literature, and analysed how the institutional spending was distributed across publishers and journal titles. According to self-reported data from 30 German universities and research organisations between 2005 and 2015, expenditures on open access publication fees increased over the years in Germany and amounted to € 9,627,537 for 7,417 open access journal articles. The average payment was € 1,298, and the median was € 1,231. A total of 94% of the article volume included in the study was supported in accordance with the price cap of € 2,000, a limit imposed by the Deutsche Forschungsgemeinschaft (DFG) as part of its funding activities for open access publishing at German universities. Expenditures varied considerably at the institutional level. There were also differences in how much the institutions spent per journal and publisher. These differences reflect, at least in part, the varying pricing schemes in place, including discounted publication fees. With an indexing coverage of 99%, Crossref thoroughly indexed the open access journal articles included in the study. A comparison with the related openly available cost data from Austria and the UK revealed that German universities and research organisations primarily funded articles in fully open access journals. By contrast, articles in hybrid journals accounted for the largest share of spending according to the Austrian and UK data. Fees paid for hybrid journals were on average higher than those paid for fully open access journals.
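
    The headline figures are internally consistent: dividing total spending by the article count reproduces the reported average fee,

        \[
          \bar{c} = \frac{€\,9{,}627{,}537}{7{,}417\ \text{articles}} \approx €\,1{,}298\ \text{per article},
        \]

    and the median (€ 1,231) lying below the mean points to a right-skewed fee distribution.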

  19. A study of institutional spending on open access publication fees in Germany

    PubMed Central

    Tullney, Marco

    2016-01-01

    Publication fees as a revenue source for open access publishing hold a prominent place on the agendas of researchers, policy makers, and academic publishers. This study contributes to the evolving empirical basis for funding these charges and examines how much German universities and research organisations spent on open access publication fees. Using self-reported cost data from the Open APC initiative, the analysis focused on the amount being spent on publication fees and compared these expenditures with data from related Austrian (FWF) and UK (Wellcome Trust, Jisc) initiatives, in terms of both size and the proportion of articles being published in fully open access and hybrid journals. We also investigated how thoroughly self-reported articles were indexed in Crossref, a DOI minting agency for scholarly literature, and analysed how the institutional spending was distributed across publishers and journal titles. According to self-reported data from 30 German universities and research organisations between 2005 and 2015, expenditures on open access publication fees increased over the years in Germany and amounted to € 9,627,537 for 7,417 open access journal articles. The average payment was € 1,298, and the median was € 1,231. A total of 94% of the article volume included in the study was supported in accordance with the price cap of € 2,000, a limit imposed by the Deutsche Forschungsgemeinschaft (DFG) as part of its funding activities for open access publishing at German universities. Expenditures varied considerably at the institutional level. There were also differences in how much the institutions spent per journal and publisher. These differences reflect, at least in part, the varying pricing schemes in place, including discounted publication fees. With an indexing coverage of 99%, Crossref thoroughly indexed the open access journal articles included in the study. A comparison with the related openly available cost data from Austria and the UK revealed that German universities and research organisations primarily funded articles in fully open access journals. By contrast, articles in hybrid journals accounted for the largest share of spending according to the Austrian and UK data. Fees paid for hybrid journals were on average higher than those paid for fully open access journals. PMID:27602289

  20. Unauthorized Disclosure: Can Behavioral Indicators Help Predict Who Will Commit Unauthorized Disclosure of Classified National Security Information?

    DTIC Science & Technology

    2015-06-01

    Katherine Herbig, Espionage by the Numbers: A Statistical Overview, accessed April 14, 2015, http://www.wright.edu/rsp/Security/Treason/Numbers.htm ... submitted for top-secret clearances with “derogatory financial information.” ... The debt amount reviewed was $500 in delinquency for at least 120 days, which ... Investigative Service’s (DIS) “delinquent debt criteria with amount of delinquent debt” and Defense Central Index of Investigation’s (DCII) final
