ERIC Educational Resources Information Center
Lehman, Rosemary
2007-01-01
This chapter looks at the development and nature of learning objects, meta-tagging standards and taxonomies, learning object repositories, learning object repository characteristics, and types of learning object repositories, with type examples. (Contains 1 table.)
Interoperability Gap Challenges for Learning Object Repositories & Learning Management Systems
ERIC Educational Resources Information Center
Mason, Robert T.
2011-01-01
An interoperability gap exists between Learning Management Systems (LMSs) and Learning Object Repositories (LORs). Learning Objects (LOs) and the associated Learning Object Metadata (LOM) that are stored within LORs adhere to a variety of LOM standards. A common LOM standard found in LORs is the Sharable Content Object Reference Model (SCORM)…
History, Context, and Policies of a Learning Object Repository
ERIC Educational Resources Information Center
Simpson, Steven Marshall
2016-01-01
Learning object repositories, a form of digital libraries, are robust systems that provide educators new ways to search for educational resources, collaborate with peers, and provide instruction to students in unique and varied ways. This study examines a learning object repository created by a large suburban school district to increase teaching…
Exploring Characterizations of Learning Object Repositories Using Data Mining Techniques
NASA Astrophysics Data System (ADS)
Segura, Alejandra; Vidal, Christian; Menendez, Victor; Zapata, Alfredo; Prieto, Manuel
Learning object repositories provide a platform for the sharing of Web-based educational resources. As these repositories evolve independently, it is difficult for users to have a clear picture of the kind of contents they give access to. Metadata can be used to automatically extract a characterization of these resources by using machine learning techniques. This paper presents an exploratory study carried out on the contents of four public repositories, using clustering and association rule mining algorithms to extract characterizations of repository contents. The results of the analysis include potential relationships between different attributes of learning objects that may be useful for gaining an understanding of the kind of resources available and eventually developing search mechanisms that consider repository descriptions as a criterion in federated search.
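The association-rule side of such a characterization can be sketched in a few lines of Python. The metadata records, attribute names, and thresholds below are invented for illustration, not drawn from the four repositories the paper analyses:

```python
from collections import Counter
from itertools import combinations

# Hypothetical learning-object metadata records (attribute: value pairs).
records = [
    {"format": "text/html", "language": "en", "interactivity": "low"},
    {"format": "text/html", "language": "en", "interactivity": "low"},
    {"format": "video/mp4", "language": "es", "interactivity": "high"},
    {"format": "text/html", "language": "en", "interactivity": "medium"},
]

def association_rules(records, min_support=0.5, min_confidence=0.8):
    """Mine single-antecedent rules (a -> b) over attribute/value pairs."""
    n = len(records)
    items = [frozenset(r.items()) for r in records]
    item_counts = Counter()
    pair_counts = Counter()
    for s in items:
        for it in s:
            item_counts[it] += 1
        for a, b in combinations(sorted(s), 2):
            pair_counts[(a, b)] += 1   # count both directions so each
            pair_counts[(b, a)] += 1   # item can act as the antecedent
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n                 # fraction of records with both items
        confidence = c / item_counts[a] # P(b | a)
        if support >= min_support and confidence >= min_confidence:
            rules.append((a, b, support, confidence))
    return rules

rules = association_rules(records)
for a, b, sup, conf in sorted(rules):
    print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")
```

On this toy data the miner surfaces rules such as language=en implying format=text/html, the kind of attribute relationship the study reports.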
ERIC Educational Resources Information Center
Caws, Catherine
2008-01-01
This paper discusses issues surrounding the development of a learning object repository (FLORE) for teaching and learning French at the postsecondary level. An evaluation based on qualitative and quantitative data was set up in order to better assess how second-language (L2) students in French perceived the integration of this new repository into…
Semantic Linking of Learning Object Repositories to DBpedia
ERIC Educational Resources Information Center
Lama, Manuel; Vidal, Juan C.; Otero-Garcia, Estefania; Bugarin, Alberto; Barro, Senen
2012-01-01
Large-sized repositories of learning objects (LOs) are difficult to create and also to maintain. In this paper we propose a way to reduce this drawback by improving the classification mechanisms of the LO repositories. Specifically, we present a solution to automate the LO classification of the Universia repository, a collection of more than 15…
FILILAB: Creation and Use of a Learning Object Repository for EFL
ERIC Educational Resources Information Center
Litzler, Mary Frances; Garcia Laborda, Jesus; Halbach, Ana
2012-01-01
Background: Students at the Universidad de Alcala need batteries of learning objects and exercises. Although student textbooks tend to include a wide range of additional exercises, students in advanced linguistics and language courses require learning objects to obtain additional practice. Online repositories offer excellent opportunities for…
An Assistant for Loading Learning Object Metadata: An Ontology Based Approach
ERIC Educational Resources Information Center
Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo
2013-01-01
In recent years, the development of Repositories of Learning Objects has increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. High-quality metadata are key to successful retrieval. Learning Objects are described with metadata, usually in the standard…
NASA Astrophysics Data System (ADS)
Zschocke, Thomas; Beniest, Jan
The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources, along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. Quality metadata are a critical component of any digital repository: they not only enable users to find the resources they require more easily, but are also essential for the operation and interoperability of the repository itself. Studies show that repositories have difficulties in obtaining good-quality metadata from their contributors, especially when this process involves many different stakeholders, as is the case with an international organization such as the CGIAR. To address this issue the CGIAR began investigating the Open ECBCheck and ISO/IEC 19796-1 standards to establish quality protocols for its training. The paper highlights the implications and challenges of strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.
Personal Spaces in Public Repositories as a Facilitator for Open Educational Resource Usage
ERIC Educational Resources Information Center
Cohen, Anat; Reisman, Sorel; Sperling, Barbra Bied
2015-01-01
Learning object repositories are a shared, open and public space; however, the possibility and ability of personal expression in an open, global, public space is crucial. The aim of this study is to explore personal spaces in a big learning object repository as a facilitator for adoption of Open Educational Resources (OER) into teaching practices…
ERIC Educational Resources Information Center
Yeni, Sabiha; Ozdener, Nesrin
2014-01-01
The purpose of the study is to investigate how pre-service teachers benefit from learning objects repositories while preparing course content. Qualitative and quantitative data collection methods were used in a mixed methods approach. This study was carried out with 74 teachers from the Faculty of Education. In the first phase of the study,…
Ontological Modeling of Educational Resources: A Proposed Implementation for Greek Schools
ERIC Educational Resources Information Center
Poulakakis, Yannis; Vassilakis, Kostas; Kalogiannakis, Michail; Panagiotakis, Spyros
2017-01-01
In eLearning context searching for suitable educational material is still a challenging issue. During the last two decades, various digital repositories, such as Learning Object Repositories, institutional repositories and latterly Open Educational Resources, have been developed to accommodate collections of learning material that can be used for…
Quality Assurance for Digital Learning Object Repositories: Issues for the Metadata Creation Process
ERIC Educational Resources Information Center
Currier, Sarah; Barton, Jane; O'Beirne, Ronan; Ryan, Ben
2004-01-01
Metadata enables users to find the resources they require; it is therefore an important component of any digital learning object repository. Much work has already been done within the learning technology community to assure metadata quality, focused on the development of metadata standards, specifications and vocabularies and their implementation…
RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests
ERIC Educational Resources Information Center
Khosravi, Hassan; Kitto, Kirsty; Cooper, Kendra
2017-01-01
Various forms of Peer-Learning Environments are increasingly being used in post-secondary education, often to help build repositories of student generated learning objects. However, large classes can result in an extensive repository, which can make it more challenging for students to search for suitable objects that both reflect their interests…
Collaborative Learning Utilizing a Domain-Based Shared Data Repository to Enhance Learning Outcomes
ERIC Educational Resources Information Center
Lubliner, David; Widmeyer, George; Deek, Fadi P.
2009-01-01
The objective of this study was to determine whether there was a quantifiable improvement in learning outcomes by integrating course materials in a 4-year baccalaureate program, utilizing a knowledge repository with a conceptual map that spans a discipline. Two new models were developed to provide the framework for this knowledge repository. A…
ERIC Educational Resources Information Center
Mozelius, Peter; Hettiarachchi, Enosha
2012-01-01
This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. Over the last decade, the eLC has been developing learning content for a nationwide e-learning…
Designing Learning Object Repositories as Systems for Managing Educational Communities Knowledge
ERIC Educational Resources Information Center
Sampson, Demetrios G.; Zervas, Panagiotis
2013-01-01
Over the past years, a number of international initiatives that recognize the importance of sharing and reusing digital educational resources among educational communities through the use of Learning Object Repositories (LORs) have emerged. Typically, these initiatives focus on collecting digital educational resources that are offered by their…
ERIC Educational Resources Information Center
Downes, Stephen
2005-01-01
When compared with, say, blogging, the deployment of learning objects has been slow indeed. While blog aggregation services are recording millions of blogs and hundreds of millions of blog posts, academic learning object repositories number their resources only in the thousands, and even major corporate repositories have only one or two million…
ERIC Educational Resources Information Center
Yalcinalp, Serpil; Emiroglu, Bulent
2012-01-01
Although many developments have been made in the design and development of learning object repositories (LORs), the efficient use of such systems is still questionable. Unless the functional use of such systems is realised and the involvement of their dynamic users is considered, these systems will probably become obsolete. This study includes both…
The Athabasca University eduSource Project: Building an Accessible Learning Object Repository
ERIC Educational Resources Information Center
Cleveland-Innes, Martha; McGreal, Rory; Anderson, Terry; Friesen, Norm; Ally, Mohamed; Tin, Tony; Graham, Rodger; Moisey, Susan; Petrinjak, Anita; Schafer, Steve
2005-01-01
Athabasca University--Canada's Open University (AU) made the commitment to put all of its courses online as part of its Strategic University Plan. In pursuit of this goal, AU participated in the eduSource project, a pan-Canadian effort to build the infrastructure for an interoperable network of learning object repositories. AU acted as a leader in…
Patterns of Learning Object Reuse in the Connexions Repository
ERIC Educational Resources Information Center
Duncan, S. M.
2009-01-01
Since the term "learning object" was first published, there has been either an explicit or implicit expectation of reuse. There has also been much speculation about why learning objects are, or are not, reused. This study quantitatively examined the actual amount and type of learning object use, including reuse, modification, and translation,…
The Effects of Using Learning Objects in Two Different Settings
ERIC Educational Resources Information Center
Cakiroglu, Unal; Baki, Adnan; Akkan, Yasar
2012-01-01
The study compared the effects of Learning Objects (LOs) in different applications: in the classroom and in extracurricular activities. First, a Learning Object Repository (LOR) was designed in parallel with the 9th grade school mathematics curriculum. One of the two treatment groups was named the "classroom group" (n…
Hybrid Multiagent System for Automatic Object Learning Classification
NASA Astrophysics Data System (ADS)
Gil, Ana; de La Prieta, Fernando; López, Vivian F.
The rapid evolution within the context of e-learning is closely linked to international efforts on the standardization of learning object metadata, which provides learners in a web-based educational system with ubiquitous access to multiple distributed repositories. This article presents a hybrid agent-based architecture that enables the recovery of learning objects tagged in Learning Object Metadata (LOM) and provides individualized help with selecting learning materials to make the most suitable choice among many alternatives.
Challenges in Developing XML-Based Learning Repositories
NASA Astrophysics Data System (ADS)
Auksztol, Jerzy; Przechlewski, Tomasz
There is no doubt that modular design has many advantages, including the most important ones: reusability and cost-effectiveness. In e-learning community parlance the modules are termed Learning Objects (LOs) [11]. An increasing number of learning objects have been created and published online, several standards have been established, and multiple repositories have been developed for them. For example Cisco Systems, Inc., "recognizes a need to move from creating and delivering large inflexible training courses, to database-driven objects that can be reused, searched, and modified independent of their delivery media" [6]. The learning object paradigm of educational resource authoring is promoted mainly to reduce the cost of content development and to increase its quality. A frequently used metaphor compares Learning Objects to Lego blocks or to objects in object-oriented program design [25]. However, a metaphor is only an abstract idea, which must be turned into something more concrete to be usable. The problem is that many papers on LOs end solely in metaphors. In our opinion the Lego and OO metaphors are gross oversimplifications of the problem, as it is much easier to assemble a Lego set or design objects in an OO program than to develop truly interoperable, context-free learning content.
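To make the XML-based repository idea concrete, here is a minimal metadata record and the code to parse it. The element names are simplified stand-ins loosely modelled on IEEE LOM, not the exact LOM XML binding (which uses its own namespace and a much richer structure):

```python
import xml.etree.ElementTree as ET

# Illustrative, simplified LOM-style record (hypothetical content).
record = """\
<lom>
  <general>
    <title>Introduction to Fractions</title>
    <language>en</language>
    <keyword>mathematics</keyword>
    <keyword>fractions</keyword>
  </general>
  <technical>
    <format>text/html</format>
  </technical>
</lom>"""

root = ET.fromstring(record)

# Extract the fields a repository would typically index for search.
lo = {
    "title": root.findtext("general/title"),
    "language": root.findtext("general/language"),
    "keywords": [k.text for k in root.findall("general/keyword")],
    "format": root.findtext("technical/format"),
}
print(lo)
```

A repository built on records like this can index the extracted fields directly, which is what makes LOs searchable independently of their delivery media.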
Learning Objects, Repositories, Sharing and Reusability
ERIC Educational Resources Information Center
Koppi, Tony; Bogle, Lisa; Bogle, Mike
2005-01-01
The online Learning Resource Catalogue (LRC) Project has been part of an international consortium for several years and currently includes 25 institutions worldwide. The LRC Project has evolved for several pragmatic reasons into an academic network whereby members can identify and share reusable learning objects as well as collaborate in a number…
ERIC Educational Resources Information Center
Wilhelm, Pierre; Wilde, Russ
2005-01-01
A course instructor and his assistant at Athabasca University investigated whether the process of transferring interoperable learning objects from online repositories facilitated course production, both pedagogically and economically. They examined the efficiency of the objects-assembly method from several perspectives while developing an online…
ERIC Educational Resources Information Center
Balatsoukas, Panos; O'Brien, Ann; Morris, Anne
2011-01-01
Introduction: This paper reports on the findings of a study investigating the potential effects of discipline (sciences and engineering versus humanities and social sciences) on the application of the Institute of Electrical and Electronic Engineers learning object metadata elements for the description of learning objects in the Jorum learning…
Model of Distributed Learning Objects Repository for a Heterogenic Internet Environment
ERIC Educational Resources Information Center
Kaczmarek, Jerzy; Landowska, Agnieszka
2006-01-01
In this article, an extension of the existing structure of learning objects is described. The solution addresses the problem of the access and discovery of educational resources in the distributed Internet environment. An overview of e-learning standards, reference models, and problems with educational resources delivery is presented. The paper…
iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata
ERIC Educational Resources Information Center
Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen
2012-01-01
Learning objects (LOs) are digital or non-digital entities used for learning, education or training, commonly stored in repositories searchable by their associated metadata. Unfortunately, under the current standards such metadata are often missing or incorrectly entered, making search difficult or impossible. In this paper, we investigate…
A recommendation module to help teachers build courses through the Moodle Learning Management System
NASA Astrophysics Data System (ADS)
Limongelli, Carla; Lombardi, Matteo; Marani, Alessandro; Sciarrone, Filippo; Temperini, Marco
2016-01-01
In traditional e-learning, teachers design sets of Learning Objects (LOs) and organize their sequencing; the material implementing the LOs can either be built anew or adopted from elsewhere (e.g. from standard-compliant repositories) and reused. This task also applies when the teacher works in a system for personalized e-learning. In this case, the burden actually increases: for instance, the LOs may need adaptation to the system through additional metadata. This paper presents a module that supports the operations of retrieving, analyzing, and importing LOs from a set of standard Learning Object Repositories, acting as a recommender system. In particular, it is designed to support the teacher in the phases of (i) retrieval of LOs, through a keyword-based search mechanism applied to the selected repositories; (ii) analysis of the returned LOs, whose information is enriched by a relevance metric based on both the results of the search operation and data on the previous use of the LOs in courses managed by the Learning Management System; and (iii) importation of LOs into the course under construction.
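The paper does not spell out its relevance metric here, but the general idea of blending a keyword-match score with prior-use data can be sketched as follows. The weights, the max-normalisation of usage counts, and the candidate LOs are all assumptions for illustration:

```python
def rank_los(candidates, w_search=0.7, w_usage=0.3):
    """Rank learning objects by a blended relevance score.

    candidates: list of (lo_id, keyword_score in [0, 1], prior_uses >= 0).
    Usage counts are normalised by the maximum in the candidate set so the
    two components are on comparable scales.
    """
    max_uses = max((u for _, _, u in candidates), default=0) or 1
    scored = [
        (lo, w_search * s + w_usage * (u / max_uses))
        for lo, s, u in candidates
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# A heavily reused LO with a middling keyword match can outrank a
# never-used LO with a strong match.
print(rank_los([("lo-a", 0.9, 0), ("lo-b", 0.6, 40), ("lo-c", 0.8, 10)]))
```

The design choice worth noting is that prior use acts as a popularity prior layered on top of the search score, which is one plausible reading of the enrichment step described in the abstract.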
Discovery and Use of Online Learning Resources: Case Study Findings
ERIC Educational Resources Information Center
Recker, Mimi M.; Dorward, James; Nelson, Laurie Miller
2004-01-01
Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called "learning objects." Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article…
Multiple Criteria Evaluation of Quality and Optimisation of e-Learning System Components
ERIC Educational Resources Information Center
Kurilovas, Eugenijus; Dagiene, Valentina
2010-01-01
The main research objective of the paper is the investigation and proposal of a comprehensive quality evaluation tool for Learning Object Repositories (LORs), suitable for their multiple criteria decision analysis, evaluation and optimisation. Both LORs' "internal quality" and "quality in use" evaluation (decision making) criteria are analysed in the paper.…
Ontology-Based Annotation of Learning Object Content
ERIC Educational Resources Information Center
Gasevic, Dragan; Jovanovic, Jelena; Devedzic, Vladan
2007-01-01
The paper proposes a framework for building ontology-aware learning object (LO) content. Previously ontologies were exclusively employed for enriching LOs' metadata. Although such an approach is useful, as it improves retrieval of relevant LOs from LO repositories, it does not enable one to reuse components of a LO, nor to incorporate an explicit…
ERIC Educational Resources Information Center
Karalar, Halit; Korucu, Agah Tugrul
2016-01-01
Although the Semantic Web offers many opportunities for learners, its effects in the classroom are not well known. This study therefore explains how learning objects, defined using the terminology of a developed ontology and kept in an object repository, should be presented to learners with the aim of…
ERIC Educational Resources Information Center
Holmes, Kathryn A.; Prieto-Rodriguez, Elena
2018-01-01
Higher education institutions routinely use Learning Management Systems (LMS) for multiple purposes; to organise coursework and assessment, to facilitate staff and student interactions, and to act as repositories of learning objects. The analysis reported here involves staff (n = 46) and student (n = 470) responses to surveys as well as data…
Analyzing Hidden Semantics in Social Bookmarking of Open Educational Resources
NASA Astrophysics Data System (ADS)
Minguillón, Julià
Web 2.0 services such as social bookmarking allow users to manage and share the links they find interesting, adding their own tags for describing them. This is especially interesting in the field of open educational resources, as Delicious is a simple way to bridge the institutional point of view (i.e. learning object repositories) with the individual one (i.e. personal collections), thus promoting the discovery and sharing of such resources by other users. In this paper we propose a methodology for analyzing such tags in order to discover hidden semantics (i.e. taxonomies and vocabularies) that can be used to improve descriptions of learning objects and make learning object repositories more visible and discoverable. We propose the use of a simple statistical analysis tool, principal component analysis, to discover which tags create clusters that can be semantically interpreted. We compare the results obtained against a collection of resources related to open educational resources, in order to better understand the real needs of people searching for open educational resources.
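Principal component analysis over a resource-by-tag matrix can be sketched directly with NumPy. The resources, tag vocabulary, and tag assignments below are invented; in this toy data the language-learning tags load together on the first component, opposite the mathematics tag, which is the kind of semantically interpretable cluster the paper looks for:

```python
import numpy as np

# Rows: resources; columns: tags (hypothetical, not the Delicious data
# analysed in the paper). Entry = 1 if the resource carries that tag.
tags = ["oer", "video", "math", "grammar", "french"]
X = np.array([
    [1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
], dtype=float)

# PCA: centre the matrix, diagonalise the covariance of the tag columns.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                    # leading principal component

# Tags loading together on PC1 (same sign) form a candidate cluster.
loadings = sorted(zip(tags, pc1), key=lambda t: abs(t[1]), reverse=True)
for tag, w in loadings:
    print(f"{tag:8s} {w:+.2f}")
```

Interpreting the signs of the loadings is the manual step: here one sign groups "video", "grammar" and "french", suggesting a language-media cluster in the folksonomy.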
New directions in medical e-curricula and the use of digital repositories.
Fleiszer, David M; Posel, Nancy H; Steacy, Sean P
2004-03-01
Medical educators involved in the growth of multimedia-enhanced e-curricula are increasingly aware of the need for digital repositories to catalogue, store and ensure access to the learning objects integrated within their online material. The experience at the Faculty of Medicine at McGill University during initial development of a mainstream electronic curriculum reflects this growing recognition that repositories can facilitate the development of a more comprehensive as well as more effective electronic curriculum. Digital repositories can also help to ensure efficient utilization of resources through the use, re-use, and reprocessing of multimedia learning objects, addressing the potential for collaboration among repositories and increasing available material exponentially. The authors review different approaches to the development of a digital repository application, as well as global and specific issues that should be examined in the initial requirements definition and development phase, to ensure that current initiatives meet long-term requirements. Often, decisions regarding the creation of e-curricula and associated digital repositories are left to interested faculty and their individual development teams. However, the development of an e-curriculum and digital repository is not predominantly a technical exercise, but rather one that affects global pedagogical strategies and curricular content and involves a commitment of large-scale resources. The outcomes of these decisions can have long-term consequences and, as such, should involve faculty at the highest levels, including the dean.
A Linked and Open Dataset from a Network of Learning Repositories on Organic Agriculture
ERIC Educational Resources Information Center
Rajabi, Enayat; Sanchez-Alonso, Salvador; Sicilia, Miguel-Angel; Manouselis, Nikos
2017-01-01
Exposing eLearning objects on the Web of Data leads to sharing and reusing of educational resources and improves the interoperability of data on the Web. Furthermore, it enriches e-learning content, as it is connected to other valuable resources using the Linked Data principles. This paper describes a study performed on the Organic.Edunet…
Learning Object Repositories in e-Learning: Challenges for Learners in Saudi Arabia
ERIC Educational Resources Information Center
AlMegren, Abdullah; Yassin, Siti Zuraiyni
2013-01-01
The advent of the millennium has seen the introduction of a new paradigm for ICT-enhanced education. Advances in ICT have led to the emergence of learning networks comprising people who want to discover and share various innovative technologies on a global scale. Over the past decade, there has been tremendous worldwide interest in the concept of…
Health Professional Learner Attitudes and Use of Digital Learning Resources
Chamberlain, Michael; Morrison, Shane; Kotsanas, George; Keating, Jennifer L; Ilic, Dragan
2013-01-01
Background Web-based digital repositories allow educational resources to be accessed efficiently and conveniently from diverse geographic locations, hold a variety of resource formats, enable interactive learning, and facilitate targeted access for the user. Unlike some other learning management systems (LMSs), such repositories allow resources to be retrieved through search engines and meta-tagged labels, and content can be streamed, which is particularly useful for multimedia resources. Objective The aim of this study was to examine usage and user experiences of an online learning repository (Physeek) in a population of physiotherapy students. The secondary aim of this project was to examine how students prefer to access resources and which resources they find most helpful. Methods The following data were examined using an audit of the repository server: (1) number of online resources accessed per day in 2010, (2) number of each type of resource accessed, (3) number of resources accessed during business hours (9 am to 5 pm) and outside business hours (years 1-4), (4) session length of each log-on (years 1-4), and (5) video quality (bit rate) of each video accessed. An online questionnaire and 3 focus groups assessed student feedback and self-reported experiences of Physeek. Results Students preferred the support provided by Physeek to other sources of educational material primarily because of its efficiency. Peak usage commonly occurred at times of increased academic need (ie, examination times). Students perceived online repositories as a potential tool to support lifelong learning and health care delivery. Conclusions The results of this study indicate that today's health professional students welcome the benefits of online learning resources because of their convenience and usability.
This represents a transition away from traditional learning styles and toward technological learning support, and may indicate a growing link between social immersion in Internet-based connections and learning styles. The true potential for Web-based resources to support student learning is as yet unknown. PMID:23324800
Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network
ERIC Educational Resources Information Center
Qu, Changtao; Nejdl, Wolfgang
2004-01-01
Edutella is an RDF-based E-Learning P2P network that is aimed to accommodate heterogeneous learning resource metadata repositories in a P2P manner and further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…
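The core of such an integration is mapping XML metadata fields into RDF statements. A toy serialiser illustrates the shape of that mapping; the resource URI and the `http://example.org/lom#` property namespace are placeholders, not Edutella's actual vocabulary:

```python
def to_ntriples(resource_uri, metadata):
    """Flatten a metadata dict into RDF triples in N-Triples text form.

    Each key/value pair becomes one (subject, predicate, literal) triple
    about the learning resource. Property URIs are illustrative.
    """
    lines = []
    for prop, value in sorted(metadata.items()):
        lines.append(
            f'<{resource_uri}> <http://example.org/lom#{prop}> "{value}" .'
        )
    return "\n".join(lines)

print(to_ntriples(
    "http://example.org/lo/42",
    {"title": "Cell Biology Quiz", "format": "text/html"},
))
```

A real adapter would additionally handle nested XML elements, datatypes, and escaping, but the dict-to-triples step is the essence of exposing XML metadata to an RDF-based P2P network.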
Laboratory E-Notebooks: A Learning Object-Based Repository
ERIC Educational Resources Information Center
Abari, Ilior; Pierre, Samuel; Saliah-Hassane, Hamadou
2006-01-01
During distributed virtual laboratory experiment sessions, a major problem is to be able to collect, store, manage and share heterogeneous data (intermediate results, analysis, annotations, etc) manipulated simultaneously by geographically distributed teammates composing a virtual team. The electronic notebook is a possible response to this…
10 CFR 63.113 - Performance objectives for the geologic repository after permanent closure.
Code of Federal Regulations, 2010 CFR
2010-01-01
Energy; DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA; Technical Criteria; Postclosure Performance Objectives; § 63.113 Performance objectives for the geologic repository after permanent closure.
ERIC Educational Resources Information Center
King, Melanie; Loddington, Steve; Manuel, Sue; Oppenheim, Charles
2008-01-01
The last couple of years have brought a rise in the number of institutional repositories throughout the world and within UK Higher Education institutions, with the majority of these repositories being devoted to research output. Repositories containing teaching and learning material are less common and the workflows and business processes…
OER Use in Intermediate Language Instruction: A Case Study
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2017-01-01
This paper reports on a case study in the experimental use of Open Educational Resources (OERs) in intermediate level language instruction. The resources come from three sources: the instructor, the students, and open content repositories. The objective of this action research project was to provide student-centered learning materials, enhance…
ERIC Educational Resources Information Center
Bates, Melanie; Loddington, Steve; Manuel, Sue; Oppenheim, Charles
2007-01-01
In the United Kingdom over the past few years there has been a dramatic growth of national and regional repositories to collect and disseminate resources related to teaching and learning. Most notable of these are the Joint Information Systems Committee's Online Repository for [Learning and Teaching] Materials as well as the Higher Education…
Digital Repositories and the Question of Data Usefulness
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Downs, R. R.
2017-12-01
The advent of ISO standards for trustworthy long-term digital repositories provides both a set of principles to develop long-term data repositories and the instruments to assess them for trustworthiness. Such mandatory high-level requirements are broad enough to be achievable, to some extent, by many scientific data centers, archives, and other repositories. But the requirement that the data be useful in the future, the requirement that is usually considered to be most relevant to the value of the repository for its user communities, largely remains subject to various interpretations and misunderstanding. However, current and future users will be relying on repositories to preserve and disseminate the data and information needed to discover, understand, and utilize these resources to support their research, learning, and decision-making objectives. Therefore, further study is needed to determine the approaches that can be adopted by repositories to make data useful to future communities of users. This presentation will describe approaches for enabling scientific data and related information, such as software, to be useful for current and potential future user communities and will present the methodology chosen to make one science discipline's data useful for both current and future users. The method uses an ontology-based information model to define and capture the information necessary to make the data useful for contemporary and future users.
Code of Federal Regulations, 2010 CFR
2010-01-01
Section 60.112; Energy; NUCLEAR REGULATORY COMMISSION (CONTINUED); DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES; Technical Criteria; Performance Objectives; § 60.112 Overall system performance objective for the geologic repository after permanent closure.
A "Simple Query Interface" Adapter for the Discovery and Exchange of Learning Resources
ERIC Educational Resources Information Center
Massart, David
2006-01-01
Developed as part of CEN/ISSS Workshop on Learning Technology efforts to improve interoperability between learning resource repositories, the Simple Query Interface (SQI) is an Application Program Interface (API) for querying heterogeneous repositories of learning resource metadata. In the context of the ProLearn Network of Excellence, SQI is used…
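The abstract above describes SQI as an API for querying heterogeneous learning-resource repositories. The following is a minimal sketch of an SQI-style session (create a session, set a query, fetch results) against an in-memory stand-in repository; the class, method names, and records are illustrative assumptions, not the actual SQI binding or a real endpoint.

```python
# Hypothetical sketch of an SQI-style query session. The backend is an
# in-memory toy repository of LOM-like records, not a real SQI service.
import uuid

class InMemorySQIRepository:
    """Toy repository exposing a session/query/results API in the SQI pattern."""

    def __init__(self, records):
        self._records = records   # list of dicts: {"title": ..., "keywords": [...]}
        self._sessions = {}       # session id -> pending query string

    def create_session(self):
        sid = str(uuid.uuid4())
        self._sessions[sid] = None
        return sid

    def set_query(self, sid, query):
        if sid not in self._sessions:
            raise KeyError("unknown session")
        self._sessions[sid] = query.lower()

    def get_results(self, sid, start=0, count=10):
        query = self._sessions[sid]
        # Match the query against titles and keyword metadata.
        hits = [r for r in self._records
                if query in r["title"].lower()
                or any(query in k.lower() for k in r["keywords"])]
        return hits[start:start + count]

repo = InMemorySQIRepository([
    {"title": "Intro to Photosynthesis", "keywords": ["biology"]},
    {"title": "Linear Algebra Basics", "keywords": ["math", "algebra"]},
])
sid = repo.create_session()
repo.set_query(sid, "algebra")
results = repo.get_results(sid)
```

The session-scoped query mirrors SQI's separation of query submission from result retrieval, which lets a federated client page through results from several repositories.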
A Distributed Multi-Agent System for Collaborative Information Management and Learning
NASA Technical Reports Server (NTRS)
Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.; Koga, Dennis (Technical Monitor)
2000-01-01
In this paper, we present DIAMS, a system of distributed, collaborative agents to help users access, manage, share and exchange information. A DIAMS personal agent helps its owner find information most relevant to current needs. It provides tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Flexible hierarchical display is integrated with indexed query search to support effective information access. Automatic indexing methods are employed to support user queries and communication between agents. Contents of a repository are kept in object-oriented storage to facilitate information sharing. Collaboration between users is aided by easy sharing utilities as well as automated information exchange. Matchmaker agents are designed to establish connections between users with similar interests and expertise. DIAMS agents provide needed services for users to share and learn information from one another on the World Wide Web.
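The matchmaker agents described above connect users with similar interests. One plausible way to sketch that is cosine similarity over keyword-weighted interest profiles, pairing users above a threshold; the profiles, weights, and threshold below are invented for illustration and are not the DIAMS representation.

```python
# Sketch of interest-based matchmaking: pair users whose interest
# vectors (keyword -> weight) are sufficiently similar.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse keyword->weight dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def matchmake(profiles, threshold=0.5):
    """Return pairs of users whose similarity meets the threshold."""
    names = sorted(profiles)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if cosine(profiles[a], profiles[b]) >= threshold]

profiles = {
    "alice": {"agents": 3, "ml": 2},
    "bob":   {"agents": 2, "ml": 3},
    "carol": {"geology": 4},
}
pairs = matchmake(profiles)  # alice and bob overlap; carol does not
```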
OLIVER: an online library of images for veterinary education and research.
McGreevy, Paul; Shaw, Tim; Burn, Daniel; Miller, Nick
2007-01-01
As part of a strategic move by the University of Sydney toward increased flexibility in learning, the Faculty of Veterinary Science undertook a number of developments involving Web-based teaching and assessment. OLIVER underpins them by providing a rich, durable repository for learning objects. To integrate Web-based learning, case studies, and didactic presentations for veterinary and animal science students, we established an online library of images and other learning objects for use by academics in the Faculties of Veterinary Science and Agriculture. The objectives of OLIVER were to maximize the use of the faculty's teaching resources by providing a stable archiving facility for graphic images and other multimedia learning objects that allows flexible and precise searching, integrating indexing standards, thesauri, pull-down lists of preferred terms, and linking of objects within cases. OLIVER offers a portable and expandable Web-based shell that facilitates ongoing storage of learning objects in a range of media. Learning objects can be downloaded in common, standardized formats so that they can be easily imported for use in a range of applications, including Microsoft PowerPoint, WebCT, and Microsoft Word. OLIVER now contains more than 9,000 images relating to many facets of veterinary science; these are annotated and supported by search engines that allow rapid access to both images and relevant information. The Web site is easily updated and adapted as required.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Performance objectives for the geologic repository... (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA... repository operations area through permanent closure. (a) Protection against radiation exposures and releases...
Microsoft Repository Version 2 and the Open Information Model.
ERIC Educational Resources Information Center
Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David
1999-01-01
Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…
ERIC Educational Resources Information Center
Issack, Santally Mohammad
2011-01-01
Over the recent years, there has been a growing interest in Open Educational Resources (OER). A similar trend was observed about a decade ago in the concept of Learning Objects, which inevitably faded without really making an impact in real-world educational contexts. A number of repositories were created that contain thousands of learning…
Learning the Language of Healthcare Enabling Semantic Web Technology in CHCS
2013-09-01
…"tuples" (subject, predicate, object) to relate data and achieve semantic interoperability. Other similar technologies exist, but their… Semantic Healthcare repository [5]. Ultimately, both of our data approaches were successful. However, our current test system is based on the CPRS demo… to extract system dependencies and workflows; to extract semantically related patient data; and to browse patient-centric views into the system.
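The excerpt above describes relating data as (subject, predicate, object) tuples. A minimal in-memory triple store with wildcard pattern matching illustrates the idea; the example facts and predicate names are invented, and a real system would use an RDF store and SPARQL.

```python
# Toy triple store: facts are (subject, predicate, object) tuples,
# and a query leaves any position as None to act as a wildcard.
def match(triples, s=None, p=None, o=None):
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

triples = [
    ("patient42", "hasCondition", "hypertension"),
    ("patient42", "prescribed", "lisinopril"),
    ("lisinopril", "treats", "hypertension"),
]
# All conditions recorded for patient42:
conditions = match(triples, s="patient42", p="hasCondition")
```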
The Use of Digital Repositories for Enhancing Teacher Pedagogical Performance
ERIC Educational Resources Information Center
Cohen, Anat; Kalimi, Sharon; Nachmias, Rafi
2013-01-01
This research examines the usage of local learning material repositories at school, as well as related teachers' attitudes and training. The study investigates the use of these repositories for enhancing teacher performance and assesses whether assimilation of the local repositories increases teachers' usage of, and contribution to, them. One…
Kononowicz, Andrzej A; Zary, Nabil; Davies, David; Heid, Jörn; Woodham, Luke; Hege, Inga
2011-01-01
Patient consents for distribution of multimedia constitute a significant element of medical case-based repositories in medicine. A technical challenge is posed by the right of patients to withdraw permission to disseminate their images or videos. A technical mechanism for spreading information about changes in multimedia usage licenses is sought. The authors gained their experience by developing and managing a large (>340 cases) repository of virtual patients within the European project eViP. The solution for dissemination of license status should reuse and extend existing metadata standards in medical education. Two methods: PUSH and PULL are described differing in the moment of update and the division of responsibilities between parties in the learning object exchange process. The authors recommend usage of the PUSH scenario because it is better adapted to legal requirements in many countries. It needs to be stressed that the solution is based on mutual trust of the exchange partners and therefore is most appropriate for use in educational alliances and consortia. It is hoped that the proposed models for exchanging consents and licensing information will become a crucial part of the technical frameworks for building case-based repositories.
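The PUSH and PULL mechanisms described above differ in when the license status travels: PUSH notifies partners at the moment of change, PULL queries the source at the moment of use. The sketch below contrasts the two; the classes and statuses are illustrative assumptions, not the eViP implementation.

```python
# Sketch of PUSH vs PULL dissemination of multimedia consent status.
class SourceRepository:
    def __init__(self):
        self.licenses = {}    # media id -> "granted" | "withdrawn"
        self.partners = []    # PUSH subscribers

    def set_license(self, media_id, status):
        self.licenses[media_id] = status
        for partner in self.partners:      # PUSH: notify at change time
            partner.on_license_change(media_id, status)

class PartnerRepository:
    def __init__(self, source):
        self.source = source
        self.cache = {}       # locally cached statuses (filled by PUSH)

    def on_license_change(self, media_id, status):
        self.cache[media_id] = status

    def can_display_push(self, media_id):
        # PUSH: trust the locally cached status.
        return self.cache.get(media_id) == "granted"

    def can_display_pull(self, media_id):
        # PULL: ask the source repository at the moment of use.
        return self.source.licenses.get(media_id) == "granted"

src = SourceRepository()
partner = PartnerRepository(src)
src.partners.append(partner)
src.set_license("video-7", "granted")
src.set_license("video-7", "withdrawn")   # patient withdraws consent
```

Under PUSH the withdrawal reaches the partner immediately, which matches the authors' point that PUSH adapts better to legal requirements: a withdrawn consent must not linger in stale caches.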
ERIC Educational Resources Information Center
Hwu, Fenfang
2013-01-01
Using script-based tracking to gain insights into the way students learn or process language information can be traced as far back as to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…
Models for Evaluating and Improving Architecture Competence
2008-03-01
learned better methods than it engaged in the past. [CMU/SEI-2008-TR-006, Section 6, "Considering the Models"] …and groups must have a repository of accumulated knowledge and experience. The Organizational Learning model provides a way to evaluate how effective that repository is. It also tells us how "mindful" the learning needs to be. The organizational coordination model…
Collaborative Recommendation of E-Learning Resources: An Experimental Investigation
ERIC Educational Resources Information Center
Manouselis, N.; Vuorikari, R.; Van Assche, F.
2010-01-01
Repositories with educational resources can support the formation of online learning communities by providing a platform for collaboration. Users (e.g. teachers, tutors and learners) access repositories, search for interesting resources to access and use, and in many cases, also exchange experiences and opinions. A particular class of online…
ALES: An Innovative Argument-Learning Environment
ERIC Educational Resources Information Center
Abbas, Safia; Sawamura, Hajime
2010-01-01
This paper presents the development of an Argument-Learning System (ALES). The idea is based on the AIF (argumentation interchange format) ontology using "Walton theory". ALES uses different mining techniques to manage a highly structured arguments repository. This repository was designed, developed and implemented by the authors. The aim is to…
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring them for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, including both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. The performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
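The study's pattern of fitting on baseline (no-leak) data and then flagging deviations in the monitoring stream can be sketched with a simple z-score rule on a pressure series. This stands in for the paper's actual online algorithms; the data and threshold below are invented.

```python
# Sketch of baseline-trained online anomaly detection: fit mean/std on
# no-leak data, then flag monitoring samples with extreme z-scores.
from statistics import mean, stdev

class ZScoreDetector:
    def __init__(self, baseline, threshold=4.0):
        self.mu = mean(baseline)
        self.sigma = stdev(baseline)
        self.threshold = threshold

    def is_anomaly(self, x):
        """Flag a sample whose z-score against the baseline exceeds the threshold."""
        return abs(x - self.mu) / self.sigma > self.threshold

baseline = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9]  # no-leak pressure
detector = ZScoreDetector(baseline)
flags = [detector.is_anomaly(x) for x in [10.0, 10.1, 14.0]]
```

A production detector would also use contextual variables (as the abstract notes) to suppress reservoir noise rather than thresholding a single channel.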
The Community as a Source of Pragmatic Input for Learners of Italian: The Multimedia Repository LIRA
ERIC Educational Resources Information Center
Zanoni, Greta
2016-01-01
This paper focuses on community participation within the LIRA project--Lingua/Cultura Italiana in Rete per l'Apprendimento (Italian language and culture for online learning). LIRA is a multimedia repository of e-learning materials aiming at recovering, preserving and developing the linguistic, pragmatic and cultural competences of second and third…
A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images
NASA Astrophysics Data System (ADS)
Möller, Manuel; Tuot, Christopher; Sintek, Michael
In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Therefore joint efforts have to be made in more than one dimension: Object detection processes have to be specified in which an abstraction is performed starting from low-level image features across landmark detection utilizing abstract domain knowledge up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.
10 CFR 60.134 - Design of seals for shafts and boreholes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... GEOLOGIC REPOSITORIES Technical Criteria Design Criteria for the Geologic Repository Operations Area § 60... the geologic repository's ability to meet the performance objectives or the period following permanent...
10 CFR 60.111 - Performance of the geologic repository operations area through permanent closure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Performance of the geologic repository operations area... OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES Technical Criteria Performance Objectives § 60.111 Performance of the geologic repository operations area through permanent closure. (a...
Object links in the repository
NASA Technical Reports Server (NTRS)
Beck, Jon; Eichmann, David
1991-01-01
Some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life-cycle of software development are explored. In particular, we wish to consider a model which provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The model we consider uses object-oriented terminology. Thus, the lattice is viewed as a data structure which contains class objects which exhibit inheritance. A description of the types of objects in the repository is presented, followed by a discussion of how they interrelate. We discuss features of the object-oriented model which support these objects and their links, and consider behavior which an implementation of the model should exhibit. Finally, we indicate some thoughts on implementing a prototype of this repository architecture.
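The model above attaches repository objects to classification vertices while adding explicit typed links between the objects themselves. A toy sketch of that structure follows; the object names, vertex labels, and link types are invented for illustration.

```python
# Sketch of repository objects with both a classification vertex and
# explicit typed links to other objects, per the lattice-plus-links model.
class RepoObject:
    def __init__(self, name, vertex):
        self.name = name
        self.vertex = vertex   # classification vertex in the lattice
        self.links = {}        # link type -> list of linked RepoObjects

    def link(self, kind, other):
        self.links.setdefault(kind, []).append(other)

spec = RepoObject("sort_spec", vertex="specification")
code = RepoObject("quicksort.c", vertex="implementation")
test = RepoObject("sort_tests", vertex="test-suite")
code.link("implements", spec)   # explicit object-to-object link
code.link("tested-by", test)

related = [o.name for o in code.links["implements"]]
```

The point of the explicit links is that traversal from an implementation to its specification or test suite does not require walking the classification lattice at all.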
Virtual patient repositories--a comparative analysis.
Küfner, Julia; Kononowicz, Andrzej A; Hege, Inga
2014-01-01
Virtual Patients (VPs) are an important component of medical education. One way to reduce the costs of creating VPs is sharing through repositories. We conducted a literature review to identify existing repositories and analyzed the 17 included repositories with regard to the search functions and metadata they provide. Most repositories provided some metadata, such as title or description, whereas other data, such as educational objectives, were less frequent. Future research could, in cooperation with repository providers, investigate user expectations and usage patterns.
Active Exploration of Large 3D Model Repositories.
Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min
2015-12-01
With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.
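The exploration loop described above lets the user label models "like" or "dislike" and re-ranks the database using both query-to-model and model-to-model similarities. A toy version scores each model as near-to-likes minus near-to-dislikes over a precomputed distance table; the distance values and scoring rule are invented stand-ins for the paper's sparse shape-descriptor metric.

```python
# Sketch of active exploration: rank models by proximity to liked
# examples and distance from disliked ones, using precomputed
# pairwise distances among database models.
def relevance(distances, liked, disliked):
    """Rank models: high score = close to liked, far from disliked."""
    scores = {}
    for m in distances:
        like_term = min(distances[m][l] for l in liked) if liked else 0.0
        dislike_term = min(distances[m][d] for d in disliked) if disliked else 0.0
        scores[m] = dislike_term - like_term
    return sorted(scores, key=scores.get, reverse=True)

# Symmetric toy distances among four 3D models.
distances = {
    "chair_a": {"chair_a": 0.0, "chair_b": 0.1, "table": 0.8, "lamp": 0.9},
    "chair_b": {"chair_a": 0.1, "chair_b": 0.0, "table": 0.7, "lamp": 0.9},
    "table":   {"chair_a": 0.8, "chair_b": 0.7, "table": 0.0, "lamp": 0.5},
    "lamp":    {"chair_a": 0.9, "chair_b": 0.9, "table": 0.5, "lamp": 0.0},
}
ranking = relevance(distances, liked={"chair_a"}, disliked={"lamp"})
```

Because the distances are precomputed offline, each feedback round costs only a table lookup per model, which is what makes the approach scale to 100K-model repositories.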
ERIC Educational Resources Information Center
Association of Research Libraries, 2009
2009-01-01
Libraries are making diverse contributions to the development of many types of digital repositories, particularly those housing locally created digital content, including new digital objects or digitized versions of locally held works. In some instances, libraries are managing a repository and its related services entirely on their own, but often…
Health professional learner attitudes and use of digital learning resources.
Maloney, Stephen; Chamberlain, Michael; Morrison, Shane; Kotsanas, George; Keating, Jennifer L; Ilic, Dragan
2013-01-16
Web-based digital repositories allow educational resources to be accessed efficiently and conveniently from diverse geographic locations, hold a variety of resource formats, enable interactive learning, and facilitate targeted access for the user. Unlike some other learning management systems (LMS), resources can be retrieved through search engines and meta-tagged labels, and content can be streamed, which is particularly useful for multimedia resources. The aim of this study was to examine usage and user experiences of an online learning repository (Physeek) in a population of physiotherapy students. The secondary aim of this project was to examine how students prefer to access resources and which resources they find most helpful. The following data were examined using an audit of the repository server: (1) number of online resources accessed per day in 2010, (2) number of each type of resource accessed, (3) number of resources accessed during business hours (9 am to 5 pm) and outside business hours (years 1-4), (4) session length of each log-on (years 1-4), and (5) video quality (bit rate) of each video accessed. An online questionnaire and 3 focus groups assessed student feedback and self-reported experiences of Physeek. Students preferred the support provided by Physeek to other sources of educational material, primarily because of its efficiency. Peak usage commonly occurred at times of increased academic need (ie, examination times). Students perceived online repositories as a potential tool to support lifelong learning and health care delivery. The results of this study indicate that today's health professional students welcome the benefits of online learning resources because of their convenience and usability. This represents a transition away from traditional learning styles toward technology-supported learning, and may indicate a growing link between students' immersion in Internet-based social connections and their learning styles.
The true potential for Web-based resources to support student learning is as yet unknown.
NCI Expands Repository of Cancer Research Models
NCI is expanding its Patient-Derived Models Repository (PDMR), which generates and distributes models like patient-derived xenografts and organoids. In this Cancer Currents Q&A with Drs. Yvonne Evrard and James Doroshow, learn how the expansion can help cancer researchers make more rapid progress.
10 CFR 63.113 - Performance objectives for the geologic repository after permanent closure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Performance objectives for the geologic repository after permanent closure. 63.113 Section 63.113 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH... and an engineered barrier system. (b) The engineered barrier system must be designed so that, working...
10 CFR 63.113 - Performance objectives for the geologic repository after permanent closure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Performance objectives for the geologic repository after permanent closure. 63.113 Section 63.113 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH... and an engineered barrier system. (b) The engineered barrier system must be designed so that, working...
10 CFR 63.113 - Performance objectives for the geologic repository after permanent closure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Performance objectives for the geologic repository after permanent closure. 63.113 Section 63.113 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH... and an engineered barrier system. (b) The engineered barrier system must be designed so that, working...
10 CFR 63.113 - Performance objectives for the geologic repository after permanent closure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Performance objectives for the geologic repository after permanent closure. 63.113 Section 63.113 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH... and an engineered barrier system. (b) The engineered barrier system must be designed so that, working...
Using OAI-PMH and METS for Exporting Metadata and Digital Objects between Repositories
ERIC Educational Resources Information Center
Bell, Jonathan; Lewis, Stuart
2006-01-01
Purpose: To examine the relationship between deposit of electronic theses in institutional and archival repositories. Specifically the paper considers the automated export of theses for deposit in the archival repository in continuation of the existing arrangement in Wales for paper-based theses. Design/methodology/approach: The paper presents a…
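Automated export between repositories as described above typically rides on OAI-PMH: a harvester fetches `<base-url>?verb=ListRecords&metadataPrefix=oai_dc` and parses the XML response. To keep the sketch self-contained, a canned response is parsed here instead of a live HTTP fetch; the record content is invented.

```python
# Sketch of parsing an OAI-PMH ListRecords response (oai_dc metadata).
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Canned response standing in for an HTTP GET of
#   <base-url>?verb=ListRecords&metadataPrefix=oai_dc
response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Example Thesis</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(response)
n_records = sum(1 for _ in root.iter(OAI + "record"))
titles = [t.text for t in root.iter(DC + "title")]
```

A real harvester would also honor the protocol's `resumptionToken` to page through large result sets, and for the archival-deposit scenario in the paper the harvested metadata would be wrapped with the digital objects in a METS package.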
Space Telecommunications Radio System (STRS) Application Repository Design and Analysis
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2013-01-01
The Space Telecommunications Radio System (STRS) Application Repository Design and Analysis document describes the STRS application repository for software-defined radio (SDR) applications intended to be compliant with the STRS Architecture Standard. The document explains how artifacts are submitted to the STRS application repository, informs potential users of that material, and helps the systems engineer understand the requirements, concepts, and approach of the repository. The STRS application repository is intended to capture knowledge, documents, and other artifacts for each waveform application or other application outside of its project, so that when the project ends, the knowledge is retained. The document also describes the transfer of technology from mission to mission, capturing lessons learned that are used for continuous improvement across projects and supporting NASA Procedural Requirements (NPRs) for performing software engineering projects and NASA's release process.
Preservation of Earth Science Data History with Digital Content Repository Technology
NASA Astrophysics Data System (ADS)
Wei, Y.; Pan, J.; Shrestha, B.; Cook, R. B.
2011-12-01
An increasing need for derived and on-demand data products in Earth Science research makes digital content more difficult for providers to manage and preserve, and for users to locate, understand, and consume. In particular, this growing need presents additional challenges in managing data processing history information and delivering it to end users. For example, the North American Carbon Program (NACP) Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) chose a modified SYNMAP land cover dataset as one of the input drivers for the participating terrestrial biospheric models. The global 1 km resolution SYNMAP data were created by harmonizing three remote sensing-based land cover products: GLCC, GLC2000, and the MODIS land cover product. The original SYNMAP land cover data were aggregated to half- and quarter-degree resolution and then enhanced with more detailed grassland and cropland types. Currently, there is no effective mechanism to convey this data processing information to the different modeling teams so they can determine whether a data product meets their needs; the process still relies heavily on offline human interaction. The NASA-sponsored ORNL DAAC has leveraged contemporary digital object repository technology to promote the representation, management, and delivery of data processing history and provenance information. Within a digital object repository, data products are managed as objects, with metadata as attributes and content delivery and management services as dissemination methods. Derivation relationships among data products can be semantically referenced between digital objects. Within the repository, data users can easily trace a derived data product back to its origin, explore metadata and documents about each intermediate data product, and discover the processing details involved in each derivation step.
Coupled with the Drupal Web content management system, the digital repository interface was enhanced to provide an intuitive graphic representation of the data processing history. Each data product is also associated with a formal metadata record in the FGDC standard; the main fields of the FGDC record are indexed for search and displayed as attributes of the data product. These features enable data users to better understand and consume a data product. The representation of data processing history in a digital repository can further promote long-term data preservation, since lineage information is a major factor in making digital data understandable and usable long into the future. Derivation references can be set up between digital objects not only within a single digital repository, but also across multiple distributed repositories. Along with emerging identification mechanisms such as the Digital Object Identifier (DOI), a flexible distributed digital repository network can be set up to better preserve digital content. In this presentation, we describe how digital content repository technology can be used to manage, preserve, and deliver data processing history information in the Earth Science research domain, with selected data archived at the ORNL DAAC and the Model and Synthesis Thematic Data Center (MAST-DC) as testing targets.
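The derivation tracking described above, where each product references the product it came from, can be sketched as a walk over derived-from links. The chain below follows the SYNMAP example in the text; the product identifiers and step descriptions are paraphrased for illustration.

```python
# Sketch of lineage traversal in a digital object repository: each
# product records its parent and the processing step that produced it.
def lineage(products, product_id):
    """Return (id, step) pairs from a product back to its origin."""
    chain = []
    while product_id is not None:
        entry = products[product_id]
        chain.append((product_id, entry["step"]))
        product_id = entry["derived_from"]
    return chain

products = {
    "synmap_1km": {"derived_from": None,
                   "step": "harmonized GLCC, GLC2000, and MODIS land cover"},
    "synmap_halfdeg": {"derived_from": "synmap_1km",
                       "step": "aggregated to half-degree resolution"},
    "synmap_enhanced": {"derived_from": "synmap_halfdeg",
                        "step": "enhanced with grassland/cropland types"},
}
chain = lineage(products, "synmap_enhanced")
```

Cross-repository derivation references work the same way once the parent identifier is globally resolvable, e.g. a DOI instead of a local key.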
Marine Corps Warfighting Laboratory Home
…of Learning Information System (MCCOLIS): a collaborative knowledge management system that contains concept development material, and a Warfighting Challenge Repository that supports the Campaign of Learning and Future…
Case-Based Reasoning in Mixed Paradigm Settings and with Learning
1994-04-30
Learning Prototypical Cases: OFF-BROADWAY, MCI, and RMHC-* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the… at Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI, and RMHC-* made the following notable…
A Remote Knowledge Repository System for Teaching and Learning.
ERIC Educational Resources Information Center
Martins, Protasio D.; Maidantchik, Carmen; Lemos, Leandro T.; Manoel de Seixas, Jose
Changes in the global economy and the extensive use of the Internet have prompted a conceptual redefinition of working and social structures and, consequently, an enhancement of the educational systems that train engineers. This paper presents a repository of remote multimedia information such as formatted or non-formatted documents, hypertext pages,…
Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses
Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma’ayan, Avi
2018-01-01
Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated ‘canned’ analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also contains the indexing of 4,901 published bioinformatics software tools, and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools. PMID:29485625
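Evaluating digital objects against the FAIR principles, as described above, can be sketched as one boolean check per principle averaged into a score. The concrete checks and example record below are illustrative assumptions, not the Datasets2Tools rubric.

```python
# Sketch of a FAIR-compliance scorer: one heuristic check per principle.
def fair_score(obj):
    checks = {
        "findable":      bool(obj.get("identifier")) and bool(obj.get("metadata")),
        "accessible":    bool(obj.get("url")),
        "interoperable": obj.get("format") in {"json", "csv", "xml"},
        "reusable":      bool(obj.get("license")),
    }
    return checks, sum(checks.values()) / len(checks)

dataset = {
    "identifier": "GSE12345",                    # hypothetical accession
    "metadata": {"title": "expression profile"},
    "url": "http://example.org/gse12345",
    "format": "csv",
    "license": "CC-BY",
}
checks, score = fair_score(dataset)
```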
A collaborative framework for Distributed Privacy-Preserving Support Vector Machine learning.
Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila
2012-01-01
A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates "privacy-insensitive" intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner.
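The DPP-SVM idea above, where a trusted server aggregates "privacy-insensitive" intermediary results instead of raw patient records, can be sketched with each site contributing a hinge-loss subgradient on its local data. Plain subgradient descent on a linear SVM stands in for the paper's exact protocol; the sites, data, and hyperparameters below are invented.

```python
# Sketch of privacy-preserving distributed SVM training: sites send
# only gradient contributions; the server never sees raw records.
def local_subgradient(w, b, X, y, C=1.0):
    """Hinge-loss subgradient contribution from one site's local data."""
    gw = [0.0] * len(w)
    gb = 0.0
    for xi, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin < 1:                      # point violates the margin
            for j in range(len(w)):
                gw[j] -= C * yi * xi[j]
            gb -= C * yi
    return gw, gb

def train_distributed(sites, dim, epochs=200, lr=0.05):
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw_total = list(w)                  # L2 regularizer gradient = w
        gb_total = 0.0
        for X, y in sites:                  # server sums site gradients
            gw, gb = local_subgradient(w, b, X, y)
            gw_total = [a + c for a, c in zip(gw_total, gw)]
            gb_total += gb
        w = [wj - lr * gj for wj, gj in zip(w, gw_total)]
        b -= lr * gb_total
    return w, b

# Two sites holding linearly separable one-feature data.
site1 = ([[2.0], [3.0]], [1, 1])
site2 = ([[-2.0], [-3.0]], [-1, -1])
w, b = train_distributed([site1, site2], dim=1)
pred = 1 if w[0] * 2.5 + b > 0 else -1
```

Because gradient sums over all sites equal the gradient on the pooled data, the aggregated model matches what centralized training would learn, which is the guarantee the abstract highlights.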
Pathology Competencies for Medical Education and Educational Cases.
Knollmann-Ritschel, Barbara E C; Regula, Donald P; Borowitz, Michael J; Conran, Richard; Prystowsky, Michael B
2017-01-01
Current medical school curricula predominantly facilitate early integration of basic science principles into clinical practice to strengthen diagnostic skills and the ability to make treatment decisions. In addition, they promote life-long learning and understanding of the principles of medical practice. The Pathology Competencies for Medical Education (PCME) were developed in response to a call to action by pathology course directors nationwide to teach medical students pathology principles necessary for the practice of medicine. The PCME are divided into three competencies: 1) Disease Mechanisms and Processes, 2) Organ System Pathology, and 3) Diagnostic Medicine and Therapeutic Pathology. Each of these competencies is broad and contains multiple learning goals with more specific learning objectives. The original competencies were designed to be a living document, meaning that they will be revised and updated periodically, and have undergone their first revision with this publication. The development of teaching cases, which have a classic case-based design, for the learning objectives is the next step in providing educational content that is peer-reviewed and readily accessible for pathology course directors, medical educators, and medical students. Application of the PCME and cases promotes a minimum standard of exposure of the undifferentiated medical student to pathophysiologic principles. The publication of the PCME and the educational cases will create a current educational resource and repository published through Academic Pathology.
ERIC Educational Resources Information Center
McArdle, Gavin; Bertolotto, Michela
2012-01-01
Today, the Internet plays a major role in distributing learning material within third level education. Multiple online facilities provide access to educational resources. While early systems relied on webpages, which acted as repositories for learning material, nowadays sophisticated online applications manage and deliver learning resources.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Keister; K. McBride
The Nuclear Waste Policy Act of 1982 (NWPA), as amended, assigned the Department of Energy (DOE) responsibility for developing and managing a Federal system for the disposal of spent nuclear fuel (SNF) and high-level radioactive waste (HLW). The Office of Civilian Radioactive Waste Management (OCRWM) is responsible for accepting, transporting, and disposing of SNF and HLW at the Yucca Mountain repository (if licensed) in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. OCRWM faces a near-term challenge--to develop and demonstrate a transportation system that will sustain safe and efficient shipments of SNF and HLW to a repository. To better inform and improve its current planning, OCRWM has extensively reviewed plans and other documents related to past high-visibility shipping campaigns of SNF and other radioactive materials within the United States. This report summarizes the results of this review and, where appropriate, lessons learned. The objective of this lessons learned study was to identify successful, best-in-class trends and commonalities from past shipping campaigns, which OCRWM could consider when planning for the development and operation of a repository transportation system. Note: this paper is for analytical and discussion purposes only, and is not an endorsement of, or commitment by, OCRWM to follow any of the comments or trends. If OCRWM elects to make such commitments at a future time, they will be appropriately documented in formal programmatic policy statements, plans and procedures. Reviewers examined an extensive study completed in 2003 by DOE's National Transportation Program (NTP), Office of Environmental Management (EM), as well as plans and documents related to SNF shipments since issuance of the NTP report.
OCRWM examined specific planning, business, institutional and operating practices that have been identified by DOE, its transportation contractors, and stakeholders as important issues that arise repeatedly. In addition, the review identifies lessons learned or activities/actions which were found not to be productive to the planning and conduct of SNF shipments (i.e. negative impacts). This paper is a 'looking back' summary of lessons learned across multiple transportation campaigns. Not all lessons learned are captured here, and participants in some of the campaigns have divergent opinions and perspectives about which lessons are most critical. This analysis is part of a larger OCRWM benchmarking effort to identify best practices to consider in future transportation of radioactive materials ('looking forward'). Initial findings from this comprehensive benchmarking analysis are expected to be available in late fall 2006.
Questions of Quality in Repositories of Open Educational Resources: A Literature Review
ERIC Educational Resources Information Center
Atenas, Javiera; Havemann, Leo
2014-01-01
Open educational resources (OER) are teaching and learning materials which are freely available and openly licensed. Repositories of OER (ROER) are platforms that host and facilitate access to these resources. ROER should not just be designed to store this content--in keeping with the aims of the OER movement, they should support educators in…
ERIC Educational Resources Information Center
Procter, Richard
2007-01-01
This paper describes how the Teaching and Learning Research Programme (TLRP) has implemented and applied DSpace as a digital repository for project and programme outputs, including published articles, conference papers, research reports, briefings and press releases. The DSpace repository has become a major element in the user engagement strategy…
ERIC Educational Resources Information Center
Zervas, Panagiotis; Sampson, Demetrios G.
2014-01-01
Mobile assisted language learning (MALL) and open access repositories for language learning resources are both topics that have attracted the interest of researchers and practitioners in technology enhanced learning (TeL). Yet, there is limited experimental evidence about possible factors that can influence and potentially enhance reuse of MALL…
Linking Big and Small Data Across the Social, Engineering, and Earth Sciences
NASA Astrophysics Data System (ADS)
Chen, R. S.; de Sherbinin, A. M.; Levy, M. A.; Downs, R. R.
2014-12-01
The challenges of sustainable development cut across the social, health, ecological, engineering, and Earth sciences, across a wide range of spatial and temporal scales, and across the spectrum from basic to applied research and decision making. The rapidly increasing availability of data and information in digital form from a variety of data repositories, networks, and other sources provides new opportunities to link and integrate both traditional data holdings as well as emerging "big data" resources in ways that enable interdisciplinary research and facilitate the use of objective scientific data and information in society. Taking advantage of these opportunities not only requires improved technical and scientific data interoperability across disciplines, scales, and data types, but also concerted efforts to bridge gaps and barriers between key communities, institutions, and networks. Given the long time perspectives required in planning sustainable approaches to development, it is also imperative to address user requirements for long-term data continuity and stewardship by trustworthy repositories. We report here on lessons learned by CIESIN working on a range of sustainable development issues to integrate data across multiple repositories and networks. This includes CIESIN's roles in developing policy-relevant climate and environmental indicators, soil data for African agriculture, and exposure and risk measures for hazards, disease, and conflict, as well as CIESIN's participation in a range of national and international initiatives related both to sustainable development and to open data access, interoperability, and stewardship.
Nicephor[e]: a web-based solution for teaching forensic and scientific photography.
Voisard, R; Champod, C; Furrer, J; Curchod, J; Vautier, A; Massonnet, G; Buzzini, P
2007-04-11
Nicephor[e] is a project funded by the "Swiss Virtual Campus" that aims to create a distance or blended web-based learning system in forensic and scientific photography and microscopy. The practical goal is to organize series of online modular courses corresponding to the educational requirements of undergraduate academic programs. Additionally, the program could be used in the context of continuing education. The architecture of the project is designed to guarantee a high level of knowledge in forensic and scientific photographic techniques, to allow easy content production, and to enable a number of different courses to share the same content. The e-learning system Nicephor[e] consists of three different parts. The first is a repository of learning objects that gathers all theoretical subject matter of the project, such as texts, animations, images, and films. This repository is a web content management system (Typo3) that permits creating, publishing, and administering dynamic content via a web browser, as well as storing it in a database. The flexibility of the system's architecture allows for easy updating of the content to follow the development of photographic technology. The instructor of a course can decide which modular contents need to be included in the course, and in which order they will be accessed by students. All the modular courses are developed in a learning management system (WebCT or Moodle) that can handle complex learning scenarios, content distribution, students, tests, and interaction with the instructor. Each course has its own learning scenario based on the goals of the course and the student's profile. The content of each course is taken from the content management system and is then structured in the learning management system according to the pedagogical goals defined by the instructor. The modular courses are created in a highly interactive setting and offer self-assessment tests to the students.
The last part of the system is a digital asset management system (Extensis Portfolio). The practical portion of each course consists of producing images of different marks or objects. The collection of this material, produced and indexed by the students and corrected by the instructor, is essential to the development of a knowledge base of photographic techniques applied to a specific forensic subject. It also represents an extensible collection of different marks from known sources obtained under various conditions, and allows these images to be reused to create image-based case files.
Shaping Solutions from Learnings in PAIs: A Blueprint
ERIC Educational Resources Information Center
Dosanjh, Nawtej; Jha, Pushkar P.
2016-01-01
Purpose: The paper outlines a portal that facilitates learning through sharing of experiences. This flow is between experience sharers and solution seekers in the domain of poverty alleviation interventions (PAIs). Practitioners working on PAIs are often confined to searching from within "lessons learned" repositories and also from…
M-Learning and Augmented Reality: A Review of the Scientific Literature on the WoS Repository
ERIC Educational Resources Information Center
Fombona, Javier; Pascual-Sevillano, Maria-Angeles; González-Videgara, MariCarmen
2017-01-01
Augmented reality emerges as a tool, on which it is necessary to examine its real educational value. This paper shows the results of a bibliometric analysis performed on documents collected from the Web of Science repository, an Internet service that concentrates bibliographic information from more than 7,000 institutions. Our analysis included an…
ERIC Educational Resources Information Center
Wong, Denis; Shephard, Kerry L.; Phillips, Peter
2008-01-01
This paper offers an insight into the development, use and governance of e-repositories for learning and teaching, illustrated by Eric Raymond's bazaar and cathedral analogies and by a comparison of collection strategies that focus on content coverage or on the needs of users. It addresses in particular the processes that encourage and achieve…
Researcher-library collaborations: Data repositories as a service for researchers.
Gordon, Andrew S; Millman, David S; Steiger, Lisa; Adolph, Karen E; Gilmore, Rick O
New interest has arisen in organizing, preserving, and sharing the raw materials (the data and metadata) that undergird the published products of research. Library and information scientists have valuable expertise to bring to bear in the effort to create larger, more diverse, and more widely used data repositories. However, for libraries to be maximally successful in providing the research data management and preservation services required of a successful data repository, librarians must work closely with researchers and learn about their data management workflows. Databrary is a data repository that is closely linked to the needs of a specific scholarly community: researchers who use video as a main source of data to study child development and learning. The project's success to date is a result of its focus on community outreach and providing services for scholarly communication, engaging institutional partners, offering services for data curation with the guidance of closely involved information professionals, and the creation of a strong technical infrastructure. Databrary plans to improve its curation tools that allow researchers to deposit their own data, enhance the user-facing feature set, increase integration with library systems, and implement strategies for long-term sustainability.
A Collaborative Framework for Distributed Privacy-Preserving Support Vector Machine Learning
Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila
2012-01-01
A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates “privacy-insensitive” intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner. PMID:23304414
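The aggregation idea in this abstract, where sites exchange only privacy-insensitive summaries yet the global model matches centralized training, can be illustrated with a minimal sketch. This is not the DPP-SVM protocol itself: it is a hypothetical full-batch subgradient scheme for a linear SVM in which each site ships only its local subgradient sum (a single vector) to a trusted server, so summing per-site contributions reproduces the pooled-data update exactly.

```python
import numpy as np

def local_subgradient(w, X, y, C):
    """Per-site hinge-loss subgradient computed on local data only;
    raw records never leave the site, only this dim-length vector does."""
    margins = y * (X @ w)
    active = margins < 1                      # points violating the margin
    return -C * (y[active, None] * X[active]).sum(axis=0)

def federated_svm(sites, dim, C=1.0, lr=0.01, epochs=200):
    """Trusted server sums per-site subgradients each epoch, which equals
    the full-batch subgradient on the pooled data, so the learned model
    matches centralized training."""
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = w.copy()                       # gradient of the L2 regularizer
        for X, y in sites:                    # each site contributes one vector
            grad += local_subgradient(w, X, y, C)
        w -= lr * grad
    return w
```

Because the per-site subgradients sum to the pooled-data subgradient, running this with the data split across sites yields (up to floating-point ordering) the same weight vector as running it with all data at one site, mirroring the abstract's guarantee.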
Understanding Teacher Professional Learning through Cyber Research
ERIC Educational Resources Information Center
Bates, Meg S.; Phalen, Lena; Moran, Cheryl
2018-01-01
Online professional learning websites provide a unique window into how teachers make self-directed choices about their own professional development. This study extends previous research on how teachers use online resource repositories to examine how teachers make choices about resource use on a professional learning website. The website, the…
Extending the ARIADNE Web-Based Learning Environment.
ERIC Educational Resources Information Center
Van Durm, Rafael; Duval, Erik; Verhoeven, Bart; Cardinaels, Kris; Olivie, Henk
One of the central notions of the ARIADNE learning platform is a share-and-reuse approach toward the development of digital course material. The ARIADNE infrastructure includes a distributed database called the Knowledge Pool System (KPS), which acts as a repository of pedagogical material, described with standardized IEEE LTSC Learning Object…
JiFUNzeni: A Blended Learning Approach for Sustainable Teachers' Professional Development
ERIC Educational Resources Information Center
Onguko, Brown Bully
2014-01-01
JiFUNzeni blended learning approach is a sustainable approach to provision of professional development (PD) for those in challenging educational contexts. JiFUNzeni approach emphasizes training regional experts to create blended learning content, working with appropriate technology while building content repositories. JiFUNzeni approach was…
Disability-Aware Adaptive and Personalised Learning for Students with Multiple Disabilities
ERIC Educational Resources Information Center
Nganji, Julius T.; Brayshaw, Mike
2017-01-01
Purpose: The purpose of this paper is to address how virtual learning environments (VLEs) can be designed to include the needs of learners with multiple disabilities. Specifically, it employs AI to show how specific learning materials from a huge repository of learning materials can be recommended to learners with various disabilities. This is…
NASA Astrophysics Data System (ADS)
Lawhead, Pamela B.; Aten, Michelle L.
2003-04-01
The Center for GeoSpatial Workforce Development is embarking on a new era in education by developing a repository of dynamic online courseware authored by the foremost industry experts within the remote sensing and GIS industries. Virtual classrooms equipped with the most advanced instruction, computation, communication, course-evaluation, and management facilities amplify these courses to enhance the learning environment and provide rapid feedback between instructors and students. The launch of this program included the objective development of the Model Curriculum by an independent consortium of remote sensing industry leaders. The Center's research and development focus on recruiting additional industry experts to develop the technical content of the courseware and then utilizing state-of-the-art technology to enhance their material with visually stimulating animations, compelling audio clips, and entertaining, interactive exercises intended to reach the broadest audience possible by targeting various learning styles. The courseware will be delivered via various media (Internet, CD-ROM, DVD, and compressed video), which translates into anywhere, anytime delivery of GeoSpatial Information Technology education.
A data library management system for midwest FreightView and its data repository.
DOT National Transportation Integrated Search
2011-03-01
Midwest FreightView (MWFV) and its associated data repository is part of a large multifaceted effort to promote regional economic development throughout the Great Lakes system. The main objective for the system is to promote sustainable maritime ...
Multi-institutional tumor banking: lessons learned from a pancreatic cancer biospecimen repository.
Demeure, Michael J; Sielaff, Timothy; Koep, Larry; Prinz, Richard; Moser, A James; Zeh, Herb; Hostetter, Galen; Black, Jodi; Decker, Ardis; Rosewell, Sandra; Bussey, Kimberly J; Von Hoff, Daniel
2010-10-01
Clinically annotated pancreatic cancer samples are needed for progress to be made toward developing more effective treatments for this deadly cancer. As part of a National Cancer Institute-funded program project, we established a biospecimen core to support the research efforts. This article summarizes the key hurdles encountered and solutions we found in the process of developing a successful multi-institution biospecimen repository.
The SeaView EarthCube project: Lessons Learned from Integrating Across Repositories
NASA Astrophysics Data System (ADS)
Diggs, S. C.; Stocks, K. I.; Arko, R. A.; Kinkade, D.; Shepherd, A.; Olson, C. J.; Pham, A.
2017-12-01
SeaView is an NSF-funded EarthCube Integrative Activity Project working with 5 existing data repositories* to provide oceanographers with highly integrated thematic data collections in user-requested formats. The project has three complementary goals: Supporting Scientists: SeaView targets scientists' need for easy access to data of interest that are ready to import into their preferred tool. Strengthening Repositories: By integrating data from multiple repositories for science use, SeaView is helping the ocean data repositories align their data and processes and make ocean data more accessible and easily integrated. Informing EarthCube (earthcube.org): SeaView's experience as an integration demonstration can inform the larger NSF EarthCube architecture and design effort. The challenges faced in this small-scale effort are informative to geosciences cyberinfrastructure more generally. Here we focus on the lessons learned that may inform other data facilities and integrative architecture projects. (The SeaView data collections will be presented at the Ocean Sciences 2018 meeting.) One example is the importance of shared semantics, with persistent identifiers, for key integration elements across the data sets (e.g. cruise, parameter, and project/program.) These must allow for revision through time and should have an agreed authority or process for resolving conflicts: aligning identifiers and correcting errors were time consuming and often required both deep domain knowledge and "back end" knowledge of the data facilities. Another example is the need for robust provenance, and tools that support automated or semi-automated data transform pipelines that capture provenance. Multiple copies and versions of data are now flowing into repositories, and onward to long-term archives such as NOAA NCEI and umbrella portals such as DataONE. 
Exact copies can be identified with hashes (for those who have the skills), but it can be painfully difficult to understand the processing or format changes that differentiate versions. As more sensors are deployed and data re-use increases, this will only become more challenging. We will discuss these and additional lessons learned, as well as invite discussion and solutions from others doing similar work. * BCO-DMO, CCHDO, OBIS, OOI, R2R
[The subject repositories of strategy of the Open Access initiative].
Soares Guimarães, M C; da Silva, C H; Horsth Noronha, I
2012-11-01
Subject repositories are defined as collections of digital objects resulting from research in a specific disciplinary field. They still occupy a restricted space on the discussion agenda of the Open Access movement compared with the breadth of discussion devoted to institutional repositories. Although subject repositories have come to prominence in the field, especially through the success of initiatives such as arXiv, PubMed and E-prints, the literature on the subject remains very limited. Despite their roots in library and information science and their focus on the management of disciplinary collections (the literature of a subject area), little information is available about the development and management of subject repositories. This text offers a brief overview of the topic and presents the potential of developing subject repositories as a way to strengthen the open access initiative.
A metadata-driven approach to data repository design.
Harvey, Matthew J; McLean, Andrew; Rzepa, Henry S
2017-01-01
The design and use of a metadata-driven data repository for research data management is described. Metadata is collected automatically during the submission process whenever possible and is registered with DataCite in accordance with their current metadata schema, in exchange for a persistent digital object identifier. Two examples of data preview are illustrated, including the demonstration of a method for integration with commercial software that confers rich domain-specific data analytics without introducing customisation into the repository itself.
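As a hedged illustration of the metadata-driven approach described here: the DataCite Metadata Schema (v4) makes a small set of properties mandatory (Identifier, Creator, Title, Publisher, PublicationYear, ResourceType), and a submission pipeline like the one the abstract describes might gate DOI registration on those fields being present. The sketch below is an assumed, minimal version of such a check, not the authors' implementation; the record keys are simplified names for the schema properties.

```python
# Mandatory properties in the DataCite Metadata Schema v4 (names simplified here)
REQUIRED = ("identifier", "creators", "titles", "publisher",
            "publicationYear", "resourceType")

def missing_mandatory(record):
    """Return the mandatory properties that are absent or empty, so a
    deposit can be rejected before a DOI is requested from DataCite."""
    return [key for key in REQUIRED if not record.get(key)]
```

A record that passes (an empty list is returned) would then be serialized to the DataCite XML or JSON schema and registered in exchange for a persistent digital object identifier, as the abstract describes.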
A Proposal to Enhance the Use of Learning Platforms in Higher Education
ERIC Educational Resources Information Center
Marques, Bertil P.; Villate, Jaime E.; Vaz de Carvalho, Carlos
2015-01-01
The results of several studies conducted to analyze the quantitative and qualitative use of learning technologies in Higher Education in Portugal showed that, in general, these technologies are not used systematically and effectively and e-learning platforms tend to be relegated to repositories of contents rather than as full-fledged tools…
Building Corpus-Informed Word Lists for L2 Vocabulary Learning in Nine Languages
ERIC Educational Resources Information Center
Charalabopoulou, Frieda; Gavrilidou, Maria; Kokkinakis, Sofie Johansson; Volodina, Elena
2012-01-01
Lexical competence constitutes a crucial aspect in L2 learning, since building a rich repository of words is considered indispensable for successful communication. CALL practitioners have experimented with various kinds of computer-mediated glosses to facilitate L2 vocabulary building in the context of incidental vocabulary learning. Intentional…
Reuse, Repurposing and Learning Design--Lessons from the DART Project
ERIC Educational Resources Information Center
Bond, Stephen T.; Ingram, Caroline; Ryan, Steve
2008-01-01
Digital Anthropological Resources for Teaching (DART) is a major project examining ways in which the use of online learning activities and repositories can enhance the teaching of anthropology and, by extension, other disciplines. This paper reports on one strand of DART activity, the development of customisable learning activities that can be…
Integrating a Learning Management System with a Student Assignments Digital Repository. A Case Study
ERIC Educational Resources Information Center
Díaz, Javier; Schiavoni, Alejandra; Osorio, María Alejandra; Amadeo, Ana Paola; Charnelli, María Emilia
2013-01-01
The integration of different platforms and information Systems in the academic environment is highly important and quite a challenge within the field of Information Technology. This integration allows for higher resource availability and improved interaction among intervening actors. In the field of e-Learning, where Learning Management Systems…
The Electronic Studio and the Intranet: Network-Based Learning.
ERIC Educational Resources Information Center
Solis, Carlos R.
The Electronic Studio, developed by the Rice University (Texas) Center for Technology in Teaching and Learning (CTTL), serves a number of purposes related to the construction and development of learning projects. It is a workplace, a display area, and a repository for tools, data, multimedia, design projects, and personal papers. This paper…
ERIC Educational Resources Information Center
O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.
2003-01-01
Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…
The impact of E-learning in medical education.
Ruiz, Jorge G; Mintzer, Michael J; Leipzig, Rosanne M
2006-03-01
The authors provide an introduction to e-learning and its role in medical education by outlining key terms, the components of e-learning, the evidence for its effectiveness, faculty development needs for implementation, evaluation strategies for e-learning and its technology, and how e-learning might be considered evidence of academic scholarship. E-learning is the use of Internet technologies to enhance knowledge and performance. E-learning technologies offer learners control over content, learning sequence, pace of learning, time, and often media, allowing them to tailor their experiences to meet their personal learning objectives. In diverse medical education contexts, e-learning appears to be at least as effective as traditional instructor-led methods such as lectures. Students do not see e-learning as replacing traditional instructor-led training but as a complement to it, forming part of a blended-learning strategy. A developing infrastructure to support e-learning within medical education includes repositories, or digital libraries, to manage access to e-learning materials, consensus on technical standardization, and methods for peer review of these resources. E-learning presents numerous research opportunities for faculty, along with continuing challenges for documenting scholarship. Innovations in e-learning technologies point toward a revolution in education, allowing learning to be individualized (adaptive learning), enhancing learners' interactions with others (collaborative learning), and transforming the role of the teacher. The integration of e-learning into medical education can catalyze the shift toward applying adult learning theory, where educators will no longer serve mainly as the distributors of content, but will become more involved as facilitators of learning and assessors of competency.
Counter-terrorism threat prediction architecture
NASA Astrophysics Data System (ADS)
Lehman, Lynn A.; Krause, Lee S.
2004-09-01
This paper will evaluate the feasibility of constructing a system to support intelligence analysts engaged in counter-terrorism. It will discuss the use of emerging techniques to evaluate a large-scale threat data repository (or Infosphere) and to compare analyst-developed models against it to identify and discover potential threat-related activity, with an uncertainty metric used to evaluate the threat. This system will also employ psychological (or intent) modeling to incorporate combatant (i.e., terrorist) beliefs and intent. The paper will explore the feasibility of constructing a hetero-hierarchical agent-based framework, or "family of agents" (a hierarchy of more than one kind or type, characterized by loose connection/feedback among elements of the hierarchy), to support "evidence retrieval," defined as combing, or searching, the threat data repository and returning information with an uncertainty metric. The counter-terrorism threat prediction architecture will be guided by a series of models constructed to represent threat operational objectives, potential targets, or terrorist objectives. The approach would compare model representations against information retrieved by the agent family to isolate or identify patterns that match within reasonable measures of proximity. The central areas of discussion will be the construction of an agent framework to search the available threat-related information repository, the evaluation of results against models that represent the cultural foundations, mindset, sociology, and emotional drive of typical threat combatants (i.e., the mind and objectives of a terrorist), and the development of evaluation techniques to compare result sets with the models representing threat behavior and threat targets.
The applicability of concepts surrounding Modeling Field Theory (MFT) will be discussed as the basis of this research into the development of proximity measures between the models and result sets, and as a means to provide feedback in support of model adaptation (learning). The increasingly complex demands facing analysts evaluating activity that threatens the security of the United States make the family-of-agents approach to data collection (fusion) a promising area. This paper will discuss a system to support the collection and evaluation of potential threat activity, as well as an approach for presenting the information.
ERIC Educational Resources Information Center
Richards, Cameron
2006-01-01
For various reasons many teachers struggle to harness the powerful informational, communicative and interactive learning possibilities of information and communication technologies (ICTs) in general. This is perhaps typified by how e-learning platforms and web portals are often used mainly as repositories for content and related online discussion…
Web-Based Learning Materials for Higher Education: The MERLOT Repository
ERIC Educational Resources Information Center
Orhun, Emrah
2004-01-01
MERLOT (Multimedia Educational Resource for Learning and Online Teaching) is a web-based open resource designed primarily for faculty and students in higher education. The resources in MERLOT include over 8,000 learning materials and support materials from a wide variety of disciplines that can be integrated within the context of a larger course.…
ERIC Educational Resources Information Center
Roberts, Pauline; Maor, Dorit; Herrington, Jan
2016-01-01
In addition to providing a useful repository for learning products, ePortfolios provide enhanced opportunities for the development of advanced learning skills. It can be argued, however, that ePortfolios are not being implemented effectively towards fulfilling this important function. This paper presents an investigation of an ePortfolio…
Object linking in repositories
NASA Technical Reports Server (NTRS)
Eichmann, David (Editor); Beck, Jon; Atkins, John; Bailey, Bill
1992-01-01
This topic is covered in three sections. The first section explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life cycle of software development. A model is considered that provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The second section gives a description of the efforts to implement the repository architecture using a commercially available object-oriented database management system. Some of the features of this implementation are described, and some of the next steps to be taken to produce a working prototype of the repository are pointed out. In the final section, it is argued that design and instantiation of reusable components have competing criteria (design-for-reuse strives for generality, design-with-reuse strives for specificity) and that providing mechanisms for each can be complementary rather than antagonistic. In particular, it is demonstrated how program slicing techniques can be applied to customization of reusable components.
Can Data Repositories Help Find Effective Treatments for Complex Diseases?
Farber, Gregory K.
2016-01-01
There are many challenges to developing treatments for complex diseases. This review explores the question of whether it is possible to imagine a data repository that would increase the pace of understanding complex diseases sufficiently well to facilitate the development of effective treatments. First, consideration is given to the amount of data that might be needed for such a data repository and whether the existing data storage infrastructure is enough. Several successful data repositories are then examined to see if they have common characteristics. An area of science where unsuccessful attempts to develop a data infrastructure is then described to see what lessons could be learned for a data repository devoted to complex disease. Then, a variety of issues related to sharing data are discussed. In some of these areas, it is reasonably clear how to move forward. In other areas, there are significant open questions that need to be addressed by all data repositories. Using that baseline information, the question of whether data archives can be effective in understanding a complex disease is explored. The major goal of such a data archive is likely to be identifying biomarkers that define sub-populations of the disease. PMID:27018167
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manteufel, R.D.; Ahola, M.P.; Turner, D.R.
A literature review has been conducted to determine the state of knowledge available in the modeling of coupled thermal (T), hydrologic (H), mechanical (M), and chemical (C) processes relevant to the design and/or performance of the proposed high-level waste (HLW) repository at Yucca Mountain, Nevada. The review focuses on identifying coupling mechanisms between individual processes and assessing their importance (i.e., whether the coupling is important, potentially important, or negligible). The significance of considering THMC-coupled processes lies in whether or not the processes impact the design and/or performance objectives of the repository. A review such as the one reported here is useful in identifying which coupled effects will be important, hence which coupled effects will need to be investigated by the US Nuclear Regulatory Commission in order to assess the assumptions, data, analyses, and conclusions in the design and performance assessment of a geologic repository. Although this work stems from regulatory interest in the design of the geologic repository, it should be emphasized that the repository design implicitly considers all of the repository performance objectives, including those associated with the time after permanent closure. The scope of this review goes beyond previous assessments in that it attempts (with the current state of knowledge) to determine which couplings are important, and to identify which computer codes are currently available to model coupled processes.
Using neural networks in software repositories
NASA Technical Reports Server (NTRS)
Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.
1992-01-01
The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.
An Objective Comparison of Cell Tracking Algorithms
Ulman, Vladimír; Maška, Martin; Magnusson, Klas E. G.; Ronneberger, Olaf; Haubold, Carsten; Harder, Nathalie; Matula, Pavel; Matula, Petr; Svoboda, David; Radojevic, Miroslav; Smal, Ihor; Rohr, Karl; Jaldén, Joakim; Blau, Helen M.; Dzyubachyk, Oleh; Lelieveldt, Boudewijn; Xiao, Pengdong; Li, Yuexiang; Cho, Siu-Yeung; Dufour, Alexandre C.; Olivo-Marin, Jean-Christophe; Reyes-Aldasoro, Constantino C.; Solis-Lemus, Jose A.; Bensch, Robert; Brox, Thomas; Stegmaier, Johannes; Mikut, Ralf; Wolf, Steffen; Hamprecht, Fred. A.; Esteves, Tiago; Quelhas, Pedro; Demirel, Ömer; Malmström, Lars; Jug, Florian; Tomancak, Pavel; Meijering, Erik; Muñoz-Barrutia, Arrate; Kozubek, Michal; Ortiz-de-Solorzano, Carlos
2017-01-01
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell tracking algorithms. With twenty-one participating algorithms and a data repository consisting of thirteen datasets of various microscopy modalities, the challenge displays today’s state of the art in the field. We analyze the results using performance measures for segmentation and tracking that rank all participating methods. We also analyze the performance of all algorithms in terms of biological measures and their practical usability. Even though some methods score high in all technical aspects, not a single one obtains fully correct solutions. We show that methods that either take prior information into account using learning strategies or analyze cells in a global spatio-temporal video context perform better than other methods under the segmentation and tracking scenarios included in the challenge. PMID:29083403
An objective comparison of cell-tracking algorithms.
Ulman, Vladimír; Maška, Martin; Magnusson, Klas E G; Ronneberger, Olaf; Haubold, Carsten; Harder, Nathalie; Matula, Pavel; Matula, Petr; Svoboda, David; Radojevic, Miroslav; Smal, Ihor; Rohr, Karl; Jaldén, Joakim; Blau, Helen M; Dzyubachyk, Oleh; Lelieveldt, Boudewijn; Xiao, Pengdong; Li, Yuexiang; Cho, Siu-Yeung; Dufour, Alexandre C; Olivo-Marin, Jean-Christophe; Reyes-Aldasoro, Constantino C; Solis-Lemus, Jose A; Bensch, Robert; Brox, Thomas; Stegmaier, Johannes; Mikut, Ralf; Wolf, Steffen; Hamprecht, Fred A; Esteves, Tiago; Quelhas, Pedro; Demirel, Ömer; Malmström, Lars; Jug, Florian; Tomancak, Pavel; Meijering, Erik; Muñoz-Barrutia, Arrate; Kozubek, Michal; Ortiz-de-Solorzano, Carlos
2017-12-01
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
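The challenge's segmentation scoring is built on Jaccard overlap between reference and computed objects. A minimal sketch of that core computation on binary masks (toy data, not the challenge's evaluation code) might look like:

```python
import numpy as np

def jaccard(reference, computed):
    """Jaccard index |A ∩ B| / |A ∪ B| of two binary masks."""
    reference = reference.astype(bool)
    computed = computed.astype(bool)
    union = np.logical_or(reference, computed).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(reference, computed).sum() / union)

# Two toy 4x4 masks of a single "cell"
ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:3] = 1          # 4 reference pixels
seg = np.zeros((4, 4), dtype=int)
seg[1:3, 1:4] = 1          # 6 computed pixels, 4 of them overlapping
print(jaccard(ref, seg))   # ≈ 0.667 (4 overlapping pixels / 6 in the union)
```

A score of 1.0 means pixel-perfect agreement; values near 0 mean the computed object barely overlaps the reference.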
SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.
2004-01-01
SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of the Space Shuttle Columbia accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.
Audit and Certification Process for Science Data Digital Repositories
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Giaretta, D.; Ambacher, B.; Ashley, K.; Conrad, M.; Downs, R. R.; Garrett, J.; Guercio, M.; Lambert, S.; Longstreth, T.; Sawyer, D. M.; Sierman, B.; Tibbo, H.; Waltz, M.
2011-12-01
Science data digital repositories are entrusted to ensure that a science community's data are available and useful to users both today and in the future. Part of the challenge in meeting this responsibility is identifying the standards, policies and procedures required to accomplish effective data preservation. Subsequently, a repository should be evaluated on whether or not it is effective in its data preservation efforts. This poster will outline the process by which digital repositories are being formally evaluated in terms of their ability to preserve the digitally encoded information with which they have been entrusted. The ISO standards on which this is based will be identified, and the relationship of these standards to the Open Archival Information System (OAIS) reference model will be shown. Six test audits have been conducted with three repositories in Europe and three in the USA. Some of the major lessons learned from these test audits will be briefly described. An assessment of the possible impact of this type of audit and certification on the practice of preserving digital information will also be provided.
Supporting Social Awareness in Collaborative E-Learning
ERIC Educational Resources Information Center
Lambropoulos, Niki; Faulkner, Xristine; Culwin, Fintan
2012-01-01
In the last decade, we have seen the emergence of virtual learning environments. Initially, these environments were little more than document repositories that tutors used to unicast content to students. Informed in part by social constructivist theories of education, later environments included capabilities for tutor-student and student-student,…
Retrieving Online Language Learning Resources: Classification and Quality
ERIC Educational Resources Information Center
Krajcso, Zita; Frimmel, Ulrike
2017-01-01
Foreign language teachers and learners use digital repositories frequently to find appropriate activities for their teaching and learning activities. The question is: How can content providers support them in finding exactly what they need and in retrieving high quality resources? This question has been discussed in the literature and in the…
Geoscientific Site Evaluation Approach for Canada's Deep Geological Repository for Used Nuclear Fuel
NASA Astrophysics Data System (ADS)
Sanchez-Rico Castejon, M.; Hirschorn, S.; Ben Belfadhel, M.
2015-12-01
The Nuclear Waste Management Organization (NWMO) is responsible for implementing Adaptive Phased Management (APM), the approach selected by the Government of Canada for long-term management of used nuclear fuel generated by Canadian nuclear reactors. The ultimate objective of APM is the centralized containment and isolation of Canada's used nuclear fuel in a Deep Geological Repository in a suitable crystalline or sedimentary rock formation. In May 2010, the NWMO published and initiated a nine-step site selection process to find an informed and willing community to host a deep geological repository for Canada's used nuclear fuel. The site selection process is designed to address a broad range of technical, social, economic and cultural factors. The site evaluation process includes three main technical evaluation steps: Initial Screenings; Preliminary Assessments; and Detailed Site Characterizations, to assess the suitability of candidate areas in a stepwise manner over a period of many years. By the end of 2012, twenty-two communities had expressed interest in learning more about the project. As of July 2015, nine communities remain in the site selection process. To date (July 2015), NWMO has completed Initial Screenings for the 22 communities that expressed interest, and has completed the first phase of Preliminary Assessments (desktop) for 20 of the communities. Phase 2 of the Preliminary Assessments has been initiated in a number of communities, with field activities such as high-resolution airborne geophysical surveys and geological mapping. This paper describes the approach, methods and criteria being used to assess the geoscientific suitability of communities currently involved in the site selection process.
Geoscience parameter data base handbook: granites and basalts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-12-01
The Department of Energy has the responsibility for selecting and constructing Federal repositories for radioactive waste. The Nuclear Regulatory Commission must license such repositories prior to construction. The basic requirement in the geologic disposal of radioactive waste is stated as: placement in a geologic host whereby the radioactive waste is not in mechanical, thermal or chemical equilibrium with the object of preventing physical or chemical migration of radionuclides into the biosphere or hydrosphere in hazardous concentration (USGS, 1977). The object of this report is to document the known geologic parameters of large granite and basalt occurrences in the coterminous United States, for future evaluation in the selection and licensing of radioactive waste repositories. The description of the characteristics of certain potential igneous hosts has been limited to existing data pertaining to the general geologic character, geomechanics, and hydrology of identified occurrences. A description of the geochemistry is the subject of a separate report.
Building a Trustworthy Environmental Science Data Repository: Lessons Learned from the ORNL DAAC
NASA Astrophysics Data System (ADS)
Wei, Y.; Santhana Vannan, S. K.; Boyer, A.; Beaty, T.; Deb, D.; Hook, L.
2017-12-01
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, https://daac.ornl.gov) for biogeochemical dynamics is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers. The mission of the ORNL DAAC is to assemble, distribute, and provide data services for a comprehensive archive of terrestrial biogeochemistry and ecological dynamics observations and models to facilitate research, education, and decision-making in support of NASA's Earth Science. Since its establishment in 1994, ORNL DAAC has been continuously building itself into a trustworthy environmental science data repository by not only ensuring the quality and usability of its data holdings, but also optimizing its data publication and management processes. This paper describes the lessons learned from ORNL DAAC's effort toward this goal. ORNL DAAC has been proactively implementing international community standards throughout its data management life cycle, including data publication, preservation, discovery, visualization, and distribution. Data files in standard formats, detailed documentation, and metadata following standard models are prepared to improve the usability and longevity of data products. Assignment of a Digital Object Identifier (DOI) ensures the identifiability and accessibility of every data product, including the different versions and revisions of its life cycle. ORNL DAAC's data citation policy assures that data producers receive appropriate recognition for use of their products. Web service standards, such as OpenSearch and Open Geospatial Consortium (OGC) standards, promote the discovery, visualization, distribution, and integration of ORNL DAAC's data holdings. Recently, ORNL DAAC began efforts to optimize and standardize its data archival and data publication workflows, to improve the efficiency and transparency of its data archival and management processes.
LingoBee--Crowd-Sourced Mobile Language Learning in the Cloud
ERIC Educational Resources Information Center
Petersen, Sobah Abbas; Procter-Legg, Emma; Cacchione, Annamaria
2013-01-01
This paper describes three case studies, where language learners were invited to use "LingoBee" as a means of supporting their language learning. LingoBee is a mobile app that provides user-generated language content in a cloud-based shared repository. Assuming that today's students are mobile savvy and "Digital Natives" able…
Bottomley, Steven; Denny, Paul
2011-01-01
A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs written by their peers. The technology used to support this activity was PeerWise--a freely available, innovative web-based system that supports students in the creation of an annotated question repository. In this case study, we describe students' contributions to, and perceptions of, the PeerWise system for a cohort of 107 second-year biomedical science students from three degree streams studying a core biochemistry subject. Our study suggests that the students are eager participants and produce a large repository of relevant, good quality MCQs. In addition, they rate the PeerWise system highly and use higher order thinking skills while taking an active role in their learning. We also discuss potential issues and future work using PeerWise for biomedical students. Copyright © 2011 Wiley Periodicals, Inc.
Sharing e-Learning Experiences: A Personalised Approach
NASA Astrophysics Data System (ADS)
Clematis, Andrea; Forcheri, Paola; Ierardi, Maria Grazia; Quarati, Alfonso
A two-tier architecture is presented, based on hybrid peer-to-peer technology, aimed at providing personalized access to heterogeneous learning sources. The architecture deploys a conceptual model that is superimposed over logically and physically separated repositories. The model is based on the interactions between users and learning resources, described by means of comments. To help users find material that satisfies their needs, mechanisms for ranking resources and for extracting personalized views of the learning space are provided.
Bytautas, Jessica P; Gheihman, Galina; Dobrow, Mark J
2017-04-01
Quality improvement (QI) is becoming an important focal point for health systems. There is increasing interest among health system stakeholders to learn from and share experiences on the use of QI methods and approaches in their work. Yet there are few easily accessible, online repositories dedicated to documenting QI activity. We conducted a scoping review of publicly available, web-based QI repositories to (i) identify current approaches to sharing information on QI practices; (ii) categorise these approaches based on hosting, scope and size, content acquisition and eligibility, content format and search, and evaluation and engagement characteristics; and (iii) review evaluations of the design, usefulness and impact of their online QI practice repositories. The search strategy consisted of traditional database and grey literature searches, as well as expert consultation, with the ultimate aim of identifying and describing QI repositories of practices undertaken in a healthcare context. We identified 13 QI repositories and found substantial variation across the five categories. The QI repositories used different terminology (eg, practices vs case studies) and approaches to content acquisition, and varied in terms of primary areas of focus. All provided some means for organising content according to categories or themes and most provided at least rudimentary keyword search functionality. Notably, none of the QI repositories included evaluations of their impact. With growing interest in sharing and spreading best practices and increasing reliance on QI as a key contributor to health system performance, the role of QI repositories is likely to expand. Designing future QI repositories based on knowledge of the range and type of features available is an important starting point for improving their usefulness and impact. Published by the BMJ Publishing Group Limited. 
IAU astroEDU: an open-access platform for peer-reviewed astronomy education activities
NASA Astrophysics Data System (ADS)
Heenatigala, Thilina; Russo, Pedro; Strubbe, Linda; Gomez, Edward
2015-08-01
astroEDU is an open-access platform for peer-reviewed astronomy education activities. It addresses key problems in educational repositories, such as variable quality, infrequent maintenance and updates, and limited content review. This is achieved through a peer-review process similar to that used for scholarly articles: each submitted activity is reviewed by both an educator and a professional astronomer, which lends credibility to the activities. astroEDU activities are open access in order to make them available to educators around the world, while letting them discover, review, distribute and remix the activities. The submission process helps authors learn how to apply enquiry-based learning in an activity, identify the process skills required, develop core goals and objectives, and evaluate the activity to determine its outcome. astroEDU is endorsed by the International Astronomical Union, meaning each activity carries an official stamp from the international organisation for professional astronomers.
Public acceptance for centralized storage and repositories of low-level waste session (Panel)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutz, H.R.
1995-12-31
Participants from various parts of the world will provide a summary of their particular country's approach to low-level waste management and the cost of public acceptance for low-level waste management facilities. Participants will discuss the number, geographic location, and type of low-level waste repositories and centralized storage facilities located in their countries. Each will discuss the amount, distribution, and duration of funds to gain public acceptance of these facilities. Participants will provide an estimated $/meter for centralized storage facilities and repositories. The panel will include a brief discussion about the ethical aspects of public acceptance costs, approaches for negotiating acceptance, and lessons learned in each country. The audience is invited to participate in the discussion.
Accessing Distributed Learning Repositories through a Courseware Watchdog.
ERIC Educational Resources Information Center
Schmitz, Christoph; Staab, Steffen; Studer, Rudi; Stumme, Gerd; Tane, Julien
Topics in education are changing at an ever-faster pace. Especially in the field of lifelong learning, the aspects that need to be taught by information providers must keep up to date with emerging topics. The Courseware Watchdog is a comprehensive module that allows users to focus on existing subfields of a discipline, but thereby be aware of…
Creativity and Mobile Language Learning Using LingoBee
ERIC Educational Resources Information Center
Petersen, Sobah Abbas; Procter-Legg, Emma; Cacchione, Annamaria
2013-01-01
In this paper, the authors explore the ideas of mobility and creativity through the use of LingoBee, a mobile app for situated language learning. LingoBee is based on ideas from crowd-sourcing and social networking to support language learners. Learners are able to create their own content and share it with other learners through a repository. The…
Exploring Teacher Use of an Online Forum to Develop Game-Based Learning Literacy
ERIC Educational Resources Information Center
Barany, Amanda; Shah, Mamta; Foster, Aroutis
2017-01-01
Game-based learning researchers have emphasized the importance of teachers' game literacy and knowledge of pedagogical approaches involved in successfully adopting an instructional approach (Bell and Gresalfi, 2017). In this paper, we describe findings from an online resource that teachers used to generate a repository of games for use both during…
Improving Graduate Students' Learning through the Use of Moodle
ERIC Educational Resources Information Center
Olmos, Susana; Mena, Juanjo; Torrecilla, Eva; Iglesias, Ana
2015-01-01
Moodle stands as an online tool that promotes enhanced learning in higher education. However, it often becomes a repository of contents instead of an interactive environment. In this paper we describe how this platform was used by university students and teachers in 104 courses and compare whether ICT--as core subject courses--use Moodle more…
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed multi-scale segmentation, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a top-down region-merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. Each segmented object was then used as the classification unit to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. The area, length and tightness of each image object, its shape rule, and texture features such as the mean, variance and entropy of image objects were used as classification features of the training samples. Based on reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository.
Finally, with the help of high-resolution reference images, a field investigation based on random sampling was conducted, achieving an overall accuracy of 90.31% and a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the traditional methodology. Our decision tree repository and rule-set based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
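The workflow described above (per-object features classified by a threshold rule set, then evaluated with overall accuracy and a Kappa coefficient) can be sketched in a few lines. The feature names, thresholds, class labels, and sample objects below are hypothetical illustrations, not the paper's actual rule set:

```python
def classify(obj):
    """Toy threshold rule set over per-object features (hypothetical rules)."""
    if obj["ndvi"] > 0.4:
        return "vegetated wetland"
    if obj["brightness"] < 60 and obj["ndwi"] > 0.2:
        return "open water"
    return "mudflat"

def overall_accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def cohens_kappa(pred, truth):
    """Kappa = (p_o - p_e) / (1 - p_e), with chance agreement p_e from marginals."""
    n = len(truth)
    labels = set(pred) | set(truth)
    p_o = overall_accuracy(pred, truth)
    p_e = sum((pred.count(c) / n) * (truth.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

objects = [
    {"ndvi": 0.6, "brightness": 80, "ndwi": 0.0},
    {"ndvi": 0.1, "brightness": 40, "ndwi": 0.5},
    {"ndvi": 0.2, "brightness": 90, "ndwi": 0.0},
    {"ndvi": 0.5, "brightness": 70, "ndwi": 0.1},
]
truth = ["vegetated wetland", "open water", "mudflat", "vegetated wetland"]
pred = [classify(o) for o in objects]
print(overall_accuracy(pred, truth), cohens_kappa(pred, truth))
```

Kappa discounts agreement expected by chance from the class marginals, which is why it is reported alongside overall accuracy in accuracy assessments like the one above.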
Types of Seizures Affecting Individuals with TSC
Preservation of Digital Objects.
ERIC Educational Resources Information Center
Galloway, Patricia
2004-01-01
Presents a literature review that covers the following topics related to preservation of digital objects: practical examples; stakeholders; recordkeeping standards; genre-specific problems; trusted repository standards; preservation methods; preservation metadata standards; and future directions. (Contains 82 references.) (MES)
Radio Model-free Noise Reduction of Radio Transmissions with Convolutional Autoencoders
2016-09-01
Using CASE to Adopt Organizational Learning at NASA
NASA Technical Reports Server (NTRS)
Templeton, Gary F.
2003-01-01
The research direction was articulated in a statement of work created in collaboration between two program colleagues, an outside researcher and an internal user. The researcher was to deliver an implemented CASE tool (Casewise™) that was to be used to serve non-traditional (i.e., not software-development-related) organizational purposes. The explicitly stated functions of the tool were the support of 1) ISO-9000 compliance in the documentation of processes and 2) the management of process improvement. The collaborative team consisted of the researcher (GT), a full-time accompanying student (CRO), and the user (JD). The team originally focused on populating the CASE repository for the purpose of meeting the two primary objectives. Consistent with the action research approach, several additional user requirements emerged as the project evolved; needs became apparent in discussions about how the tool would be used to solve organizational problems. These deliverables were contained within the CASE repository: 1) a paradigm diagram; 2) a context diagram; 3) child diagrams; 4) 73 issues relating to organizational change; and 5) a compendium of stakeholder interview transcripts. All record keeping was done manually and then keyed into the CASE interface. An issue is the difference between an organization's current situation (action) and its collective ideals.
Native Americans and state and local governments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusco, E.R.
1991-10-01
Native Americans' concerns arising from the possibility of establishment of a nuclear repository for high level wastes at Yucca Mountain fall principally into two main categories. First, the strongest objection to the repository comes from traditional Western Shoshones. Their objections are based on a claim that the Western Shoshones still own Yucca Mountain and also on the assertion that putting high level nuclear wastes into the ground is a violation of their religious views regarding nature. Second, there are several reservations around the Yucca Mountain site that might be affected in various ways by building of the repository. There is a question about how many such reservations there are, which can only be decided when more information is available. This report discusses two questions: the bearing of the continued vigorous assertion by traditionalist Western Shoshones of their land claim; and the extent to which Nevada state and local governments are able to understand and represent Indian viewpoints about Yucca Mountain.
Building Scientific Data's list of recommended data repositories
NASA Astrophysics Data System (ADS)
Hufton, A. L.; Khodiyar, V.; Hrynaszkiewicz, I.
2016-12-01
When Scientific Data launched in 2014 we provided our authors with a list of recommended data repositories to help them identify data hosting options that were likely to meet the journal's requirements. This list has grown in size and scope, and is now a central resource for authors across the Nature-titled journals. It has also been used in the development of data deposition policies and recommended repository lists across Springer Nature and at other publishers. Each new addition to the list is assessed according to a series of criteria that emphasize the stability of the resource, its commitment to principles of open science and its implementation of relevant community standards and reporting guidelines. A preference is expressed for repositories that issue digital object identifiers (DOIs) through the DataCite system and that share data under the Creative Commons CC0 waiver. Scientific Data currently lists fourteen repositories that focus on specific areas within the Earth and environmental sciences, as well as the broad scope repositories, Dryad and figshare. Readers can browse and filter datasets published at the journal by the host repository using ISA-explorer, a demo tool built by the ISA-tools team at Oxford University [1]. We believe that well-maintained lists like this one help publishers build a network of trust with community data repositories and provide an important complement to more comprehensive data repository indices and more formal certification efforts. In parallel, Scientific Data has also improved its policies to better support submissions from authors using institutional and project-specific repositories, without requiring each to apply for listing individually.
Online resources:
Journal homepage: http://www.nature.com/scientificdata
Data repository criteria: http://www.nature.com/sdata/policies/data-policies#repo-criteria
Recommended data repositories: http://www.nature.com/sdata/policies/repositories
Archived copies of the list: https://dx.doi.org/10.6084/m9.figshare.1434640.v6
Reference:
[1] Gonzalez-Beltran, A. ISA-explorer: A demo tool for discovering and exploring Scientific Data's ISA-tab metadata. Scientific Data Updates http://blogs.nature.com/scientificdata/2015/12/17/isa-explorer/ (2015).
ERIC Educational Resources Information Center
Coughlan, Tony; Perryman, Leigh-Anne
2011-01-01
This article explores the relationship between academic disciplines' representation in the United Kingdom Open University's (OU) OpenLearn open educational resources (OER) repository and in the OU's fee-paying curriculum. Becher's (1989) typology was used to subdivide the OpenLearn and OU fee-paying curriculum content into four disciplinary…
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
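As a rough illustration of the online update described above, the sketch below implements a Mairal-style online dictionary learner in NumPy: sparse codes are obtained with a few ISTA iterations, and the dictionary columns are then refreshed by block-coordinate descent. This is a generic sketch, not the authors' implementation; the function names, the ISTA coding step, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=50):
    """Approximate sparse code of x over dictionary D via ISTA iterations."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the quadratic term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                                   # gradient step
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

class OnlineDictionary:
    """Online dictionary update via block-coordinate descent over columns."""
    def __init__(self, D0):
        self.D = D0 / np.linalg.norm(D0, axis=0)   # unit-norm atoms
        k = D0.shape[1]
        self.A = np.zeros((k, k))                  # accumulated sum of a a^T
        self.B = np.zeros((D0.shape[0], k))        # accumulated sum of x a^T

    def partial_fit(self, X):
        """Fold a new batch of training shapes into the existing dictionary."""
        for x in X:
            a = sparse_code(self.D, x)
            self.A += np.outer(a, a)
            self.B += np.outer(x, a)
        # block-coordinate descent: refresh one dictionary column at a time
        for j in range(self.D.shape[1]):
            if self.A[j, j] < 1e-12:
                continue  # atom never used; leave it unchanged
            u = (self.B[:, j] - self.D @ self.A[:, j]) / self.A[j, j] + self.D[:, j]
            self.D[:, j] = u / max(np.linalg.norm(u), 1.0)
        return self
```

The point of the accumulators A and B is that each new batch updates the existing dictionary in place, rather than re-running K-SVD over all shapes seen so far.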
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentil-Beccot, Anne; Mele, Salvatore; /CERN
Contemporary scholarly discourse follows many alternative routes in addition to the three-century old tradition of publication in peer-reviewed journals. The field of High-Energy Physics (HEP) has explored alternative communication strategies for decades, initially via the mass mailing of paper copies of preliminary manuscripts, then via the inception of the first online repositories and digital libraries. This field is uniquely placed to answer recurrent questions raised by the current trends in scholarly communication: is there an advantage for scientists to make their work available through repositories, often in preliminary form? Is there an advantage to publishing in Open Access journals? Do scientists still read journals or do they use digital repositories? The analysis of citation data demonstrates that free and immediate online dissemination of preprints creates an immense citation advantage in HEP, whereas publication in Open Access journals presents no discernible advantage. In addition, the analysis of clickstreams in the leading digital library of the field shows that HEP scientists seldom read journals, preferring preprints instead.
NASA Astrophysics Data System (ADS)
Stall, S.
2016-12-01
To be trustworthy is to be reliable, dependable, honest, principled, ethical, incorruptible, and more. A trustworthy person demonstrates these qualities over time and under all circumstances. A trustworthy repository demonstrates these qualities through the team that manages the repository and its responsible organization. The requirements of a Trusted Digital Repository (TDR) in ISO 16363 can be tough to reach and tough to maintain. Challenges include limited funds, limited resources and/or skills, and an unclear path to successfully achieving the requirements. The ISO standard defines each requirement separately, but a successful certification recognizes that there are many cross-dependencies among the requirements. Understanding these dependencies leads to a more efficient path towards success. At AGU we recognize that reaching the goal of the TDR ISO standard, or any set of data management objectives defined by an organization, has a better chance of success if the organization clearly knows its current capability, the improvements that are needed, and the best way to make (and maintain) those changes. AGU has partnered with the CMMI® Institute to adapt their Data Management Maturity (DMM)SM model within the Earth and space sciences. Using the DMM, AGU developed a new Data Management Assessment Program aimed at helping data repositories, large and small, domain-specific to general, assess and improve data management practices to meet their goals, including becoming a Trustworthy Digital Repository. The requirements to achieve the TDR ISO standard are aligned to the data management best practices defined in the DMM model. Using the DMM as a process improvement tool in conjunction with the Data Management Assessment method, a team seeking the objective of the TDR ISO standard receives a clear road map to achieving their goal as an outcome of the assessment.
Publishers and agencies are beginning to recommend or even require that repositories demonstrate that they are practicing best practices or meeting certain standards. Data preserved in a data facility that is working on achieving a TDR standard will have the level of care desired by the publishing community as well as the science community. Better Data Management results in Better Science.
Privacy Impact Assessment for the eDiscovery Service
This system collects Logical Evidence Files, which include data from workstations, laptops, SharePoint and document repositories. Learn how the data is collected, used, who has access, the purpose of data collection, and record retention policies.
Digitizing Dissertations for an Institutional Repository: A Process and Cost Analysis*
Piorun, Mary; Palmer, Lisa A.
2008-01-01
Objective: This paper describes the Lamar Soutter Library's process and costs associated with digitizing 300 doctoral dissertations for a newly implemented institutional repository at the University of Massachusetts Medical School. Methodology: Project tasks included identifying metadata elements, obtaining and tracking permissions, converting the dissertations to an electronic format, and coordinating workflow between library departments. Each dissertation was scanned, reviewed for quality control, enhanced with a table of contents, processed through an optical character recognition function, and added to the institutional repository. Results: Three hundred and twenty dissertations were digitized and added to the repository for a cost of $23,562, or $0.28 per page. Seventy-four percent of the authors who were contacted (n = 282) granted permission to digitize their dissertations. Processing time per title was 170 minutes, for a total processing time of 906 hours. In the first 17 months, full-text dissertations in the collection were downloaded 17,555 times. Conclusion: Locally digitizing dissertations or other scholarly works for inclusion in institutional repositories can be cost effective, especially if small, defined projects are chosen. A successful project serves as an excellent recruitment strategy for the institutional repository and helps libraries build new relationships. Challenges include workflow, cost, policy development, and copyright permissions. PMID:18654648
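The reported figures can be cross-checked with a quick back-of-the-envelope calculation (all numbers are taken from the abstract above):

```python
# Figures from the abstract: 320 dissertations digitized, $23,562 total cost,
# $0.28 per page, 170 minutes of processing time per title.
titles = 320
total_cost = 23_562
cost_per_page = 0.28
minutes_per_title = 170

pages = total_cost / cost_per_page          # implied total page count
hours = titles * minutes_per_title / 60     # total processing time in hours

print(round(pages))   # roughly 84,000 pages across the collection
print(round(hours))   # about 907 hours, matching the reported 906
```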
Gorzalczany, Marian B; Rudzinski, Filip
2017-06-07
This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.
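A minimal 1-D SOM (neuron chain) can be sketched as below. The paper's distinguishing mechanisms, disconnecting the chain into subchains, reconnecting them, and dynamically varying the neuron count, are deliberately omitted, so this is only the baseline the authors generalize; all names and parameters are illustrative.

```python
import numpy as np

def train_som_chain(X, n_neurons=10, n_epochs=30, lr0=0.5, radius0=3.0, seed=0):
    """Train a plain self-organizing chain of neurons (1-D neighborhood).

    Baseline SOM only: the splitting/reconnecting subchain mechanisms and
    dynamic neuron-count regulation of the generalized approach are not shown.
    """
    rng = np.random.default_rng(seed)
    # initialize neuron weights from randomly chosen data points
    W = X[rng.choice(len(X), n_neurons, replace=False)].astype(float)
    idx = np.arange(n_neurons)
    for epoch in range(n_epochs):
        lr = lr0 * (1 - epoch / n_epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - epoch / n_epochs), 0.5)  # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Gaussian neighborhood measured along the chain (index distance)
            h = np.exp(-((idx - winner) ** 2) / (2 * radius ** 2))
            W += lr * h[:, None] * (x - W)
    return W
```

The trained weight vectors act as the multi-prototypes mentioned above; clustering them, or cutting unusually long links between consecutive neurons, yields cluster assignments without a predefined number of clusters.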
The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access
NASA Astrophysics Data System (ADS)
Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.
2017-12-01
The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization in order to improve the efficiency of access, reduce the costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization that included publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH serves to provide engagement, training, search, and support through the following four nodes (see figure).
DASH Metadata: DASH provides resources for creating and cataloging metadata to the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation.
DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface of search results. DASH search will establish interoperability by facilitating metadata sharing with other federated systems.
DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested.
DASH Repository: A repository is under development for NCAR datasets currently not in existing lab-managed archives. The DASH repository will be under NCAR governance and meet Trustworthy Repositories Audit & Certification (TRAC) requirements. This poster will highlight the processes, lessons learned, and current status of the DASH effort at NCAR.
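The harvest-and-validate flow described for DASH metadata can be illustrated with a stdlib-only sketch that checks a harvested XML record for required elements. The element names below are invented for the example; the actual NCAR Dialect is a profile of ISO 19115 with its own element set.

```python
import xml.etree.ElementTree as ET

# Toy validation of a harvested metadata record against a minimal "dialect":
# a short list of required elements that must be present and non-empty.
REQUIRED = ["title", "abstract", "contact", "identifier"]

RECORD_XML = """<metadata>
  <title>Surface Temperature Dataset</title>
  <abstract>Hourly 2-m temperature observations.</abstract>
  <contact>datahelp@example.org</contact>
  <identifier>doi:10.0000/example</identifier>
</metadata>"""

def validate(xml_text, required=REQUIRED):
    """Return the list of required elements missing or empty in the record."""
    root = ET.fromstring(xml_text)
    return [tag for tag in required
            if root.find(tag) is None or not (root.find(tag).text or "").strip()]

missing = validate(RECORD_XML)   # an empty list means the record passes
```

In a real harvester this check would run on each record pulled from the Web Accessible Folder before the record is accepted into the catalog.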
Developing the Tools for Geologic Repository Monitoring - Andra's Monitoring R and D Program - 12045
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buschaert, S.; Lesoille, S.; Bertrand, J.
2012-07-01
The French Safety Guide recommends that Andra develop a monitoring program to be implemented during repository construction and conducted until (and possibly after) closure, in order to confirm expected behavior and enhance knowledge of relevant processes. To achieve this, Andra has developed an overall monitoring strategy and identified specific technical objectives to inform disposal process management on evolutions relevant to both the long term safety and reversible, pre-closure management of the repository. Andra has launched an ambitious R and D program to ensure that reliable, durable, metrologically qualified and tested monitoring systems will be available at the time of repository construction in order to respond to monitoring objectives. After four years of a specific R and D program, first observations are described and recommendations are proposed. The results derived from 4 years of Andra's R and D program allow three main observations to be shared. First, while other industries also invest in monitoring equipment, their obvious emphasis will always be on their specific requirements and needs, thus often only providing a partial match with repository requirements. Examples can be found for all available sensors, which are generally not resistant to radiation. Second, the very close scrutiny anticipated for the geologic disposal process is likely to place an unprecedented emphasis on the quality of monitoring results. It therefore seems important to emphasize specific developments with an aim at providing metrologically qualified systems. Third, adapting existing technology to specific repository needs, and providing adequate proof of their worth, is a lengthy process.
In conclusion, it therefore seems prudent to plan ahead and to invest wisely in the adequate development of those monitoring tools that will likely be needed in the repository to respond to the implementers' and regulators' requirements, including those agreed and developed to respond to potential stakeholder expectations. (authors)
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring; data is crucial to create an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; a repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; the help-seeking model tutors metacognition; Scooter uses machine-learning detectors of student engagement.
Navigation as a New Form of Search for Agricultural Learning Resources in Semantic Repositories
NASA Astrophysics Data System (ADS)
Cano, Ramiro; Abián, Alberto; Mena, Elena
Education is essential when it comes to raise public awareness on the environmental and economic benefits of organic agriculture and agroecology (OA & AE). Organic.Edunet, an EU funded project, aims at providing a freely-available portal where learning contents on OA & AE can be published and accessed through specialized technologies. This paper describes a novel mechanism for providing semantic capabilities (such as semantic navigational queries) to an arbitrary set of agricultural learning resources, in the context of the Organic.Edunet initiative.
Fukushima Daiichi Information Repository FY13 Status
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis; Phelan, Cherie; Schwieder, Dave
The accident at the Fukushima Daiichi nuclear power station in Japan is one of the most serious in commercial nuclear power plant operating history. Much will be learned that may be applicable to the U.S. reactor fleet, nuclear fuel cycle facilities, and supporting systems, and the international reactor fleet. For example, lessons from Fukushima Daiichi may be applied to emergency response planning, reactor operator training, accident scenario modeling, human factors engineering, radiation protection, and accident mitigation; as well as influence U.S. policies towards the nuclear fuel cycle including power generation, and spent fuel storage, reprocessing, and disposal. This document describes the database used to establish a centralized information repository to store and manage the Fukushima data that has been gathered. The data is stored in a secured (password protected and encrypted) repository that is searchable and available to researchers at diverse locations.
OWLing Clinical Data Repositories With the Ontology Web Language
Pastor, Xavier; Lozano, Esther
2014-01-01
Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework to build data repositories with a solid relational storage. 
Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical staff, thereby facilitating the engineering of clinical software systems. PMID:25599697
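The ontology-driven idea, one declarative model from which both the web form and the generic storage are derived, can be sketched as follows. This is a hypothetical miniature, not OntoCRF's actual data model or API:

```python
# Hypothetical sketch: the field definitions play the role of the ontology,
# and both the HTML form and the generic triple-style storage are derived
# from them. None of these names come from OntoCRF itself.
FORM_MODEL = {
    "name": "blood_pressure",
    "fields": [
        {"id": "systolic", "label": "Systolic (mmHg)", "type": "number"},
        {"id": "diastolic", "label": "Diastolic (mmHg)", "type": "number"},
        {"id": "notes", "label": "Notes", "type": "text"},
    ],
}

def render_form(model):
    """Build an HTML form on the fly from the declarative model."""
    rows = [f'<form name="{model["name"]}">']
    for f in model["fields"]:
        rows.append(f'  <label>{f["label"]}'
                    f' <input name="{f["id"]}" type="{f["type"]}"></label>')
    rows.append("</form>")
    return "\n".join(rows)

def store(model, record_id, values):
    """Store submitted values as generic (record, field, value) triples."""
    return [(record_id, f["id"], values[f["id"]]) for f in model["fields"]]

html = render_form(FORM_MODEL)
triples = store(FORM_MODEL, "rec-1",
                {"systolic": 120, "diastolic": 80, "notes": "ok"})
```

Changing FORM_MODEL immediately changes both the user interface and the storage layout, which is the "no database design or programming on change" property the abstract describes.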
NASA Astrophysics Data System (ADS)
Downs, R. R.; Chen, R. S.
2011-12-01
Services that preserve and enable future access to scientific data are necessary to ensure that the data that are being collected today will be available for use by future generations of scientists. Many data centers, archives, and other digital repositories are working to improve their ability to serve as long-term stewards of scientific data. Trust in sustainable data management and preservation capabilities of digital repositories can influence decisions to use these services to deposit or obtain scientific data. Building on the Open Archival Information System (OAIS) Reference Model developed by the Consultative Committee for Space Data Systems (CCSDS) and adopted by the International Organization for Standardization as ISO 14721:2003, new standards are being developed to improve long-term data management processes and documentation. The Draft International Standard ISO/DIS 16363, "Space data and information transfer systems - Audit and certification of trustworthy digital repositories" offers the potential to evaluate digital repositories objectively in terms of their trustworthiness as long-term stewards of digital resources. In conjunction with this, the CCSDS and ISO are developing another draft standard for the auditing and certification process, ISO/DIS 16919, "Space data and information transfer systems - Requirements for bodies providing audit and certification of candidate trustworthy digital repositories". Six test audits were conducted of scientific data centers and archives in Europe and the United States to test the use of these draft standards and identify potential improvements for the standards and for the participating digital repositories. We present a case study of the test audit conducted on the NASA Socioeconomic Data and Applications Center (SEDAC) and describe the preparation, the audit process, recommendations received, and next steps to obtain certification as a trustworthy digital repository, after approval of the ISO/DIS standards.
NASA Astrophysics Data System (ADS)
Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.
2017-02-01
Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes, and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.
A Linked Dataset of Medical Educational Resources
ERIC Educational Resources Information Center
Dietze, Stefan; Taibi, Davide; Yu, Hong Qing; Dovrolis, Nikolas
2015-01-01
Reusable educational resources have become increasingly important for enhancing learning and teaching experiences, particularly in the medical domain where resources are particularly expensive to produce. While interoperability across educational resource metadata repositories is as yet limited due to the heterogeneity of metadata standards and interface…
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.
2015-10-26
Under the auspices of the International Atomic Energy Agency (IAEA), nationally developed underground research laboratories (URLs) and associated research institutions are being offered for use by other nations. These facilities form an Underground Research Facilities (URF) Network for training in and demonstration of waste disposal technologies and the sharing of knowledge and experience related to geologic repository development, research, and engineering. In order to achieve its objectives, the URF Network regularly sponsors workshops and training events related to the knowledge base that is transferable between existing URL programs and to nations with an interest in developing a new URL. This report describes the role of URLs in the context of a general timeline for repository development. This description includes identification of key phases and activities that contribute to repository development as a repository program evolves from an early research and development phase to later phases such as construction, operations, and closure. This information is cast in the form of a matrix with the entries in this matrix forming the basis of the URF Network roadmap that will be used to identify and plan future workshops and training events.
Metadata management and semantics in microarray repositories.
Kocabaş, F; Can, T; Baykal, N
2011-12-01
The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results, in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate that there is a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework); they can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that exchange of information and asking of knowledge discovery questions can become possible with the use of this metadata framework.
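The metadata-card idea can be illustrated by encoding a few card fields as RDF triples and serializing them as N-Triples using only the standard library. The vocabulary URIs and field names are invented for the example; a real repository would use community vocabularies:

```python
# Minimal illustration of a metadata card as RDF triples, serialized as
# N-Triples. All URIs below are made up for the example.
def ntriple(s, p, o):
    """Serialize one triple; URIs are angle-bracketed, literals quoted."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

EX = "http://example.org/meta/"
card = {
    EX + "organism": "Homo sapiens",
    EX + "platform": "GPL570",
    EX + "design": "disease_state_design",
}

experiment = "http://example.org/experiment/E-0001"
lines = [ntriple(experiment, p, o) for p, o in sorted(card.items())]
document = "\n".join(lines)
```

Because each card becomes plain triples, cards from different repositories can be merged into one graph and queried together, which is the integration and exchange property the abstract claims.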
Exploiting the HASH Planetary Nebula Research Platform
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan; Frew, David J.
2017-10-01
The HASH (Hong Kong/ AAO/ Strasbourg/ Hα) planetary nebula research platform is a unique data repository with a graphical interface and SQL capability that offers the community powerful, new ways to undertake Galactic PN studies. HASH currently contains multi-wavelength images, spectra, positions, sizes, morphologies and other data whenever available for 2401 true, 447 likely, and 692 possible Galactic PNe, for a total of 3540 objects. An additional 620 Galactic post-AGB stars, pre-PNe, and PPN candidates are included. All objects were classified and evaluated following the precepts and procedures established and developed by our group over the last 15 years. The complete database contains over 6,700 Galactic objects including the many mimics and related phenomena previously mistaken or confused with PNe. Curation and updating currently occurs on a weekly basis to keep the repository as up to date as possible until the official release of HASH v1 planned in the near future.
Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya
2006-01-01
We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule-base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified and an approach to implement these inferences using an OWL based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integration of clinical decision support functionality with the invoking application and the underlying clinical data repository; and their associated trade-offs are discussed and presented.
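A toy version of the declarative if-then architecture described above, rules stored as data that reference attributes and methods of a business object, might look like this (the rule content and class names are invented for illustration, not Partners' actual rule base):

```python
# Declarative if-then rules stored as data, referencing attributes and
# methods of a business object. The clinical content is invented.
class Patient:
    def __init__(self, age, creatinine):
        self.age = age
        self.creatinine = creatinine

    def renal_impairment(self):
        return self.creatinine > 1.5  # simplistic threshold for the example

RULES = [
    {"name": "renal-dose-adjust",
     "if": lambda p: p.renal_impairment(),          # calls an object method
     "then": "reduce renally cleared drug doses"},
    {"name": "geriatric-review",
     "if": lambda p: p.age >= 75,                   # reads an object attribute
     "then": "flag for geriatric medication review"},
]

def run_rules(patient, rules=RULES):
    """Rules engine: evaluate each condition and collect the fired actions."""
    return [r["then"] for r in rules if r["if"](patient)]

advice = run_rules(Patient(age=80, creatinine=2.1))
```

Keeping the rules as data rather than code is what makes the rule base maintainable: clinical content can change without touching the engine, which mirrors the scalability and maintenance trade-offs the abstract discusses.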
Using a blog as an integrated eLearning tool and platform.
Goh, Poh Sun
2016-06-01
Technology enhanced learning or eLearning allows educators to expand access to educational content, promotes engagement with students and makes it easier for students to access educational material at a time, place and pace which suits them. The challenge for educators beginning their eLearning journey is to decide where to start, which includes the choice of an eLearning tool and platform. This article will share one educator's decision making process, and experience using blogs as a flexible and versatile integrated eLearning tool and platform. Apart from being a cost effective/free tool and platform, blogs offer the possibility of creating a hyperlinked indexed content repository, for both created and curated educational material; as well as a distribution and engagement tool and platform. Incorporating pedagogically sound activities and educational practices into a blog promote a structured templated teaching process, which can be reproduced. Moving from undergraduate to postgraduate training, educational blogs supported by a comprehensive online case-based repository offer the possibility of training beyond competency towards proficiency and expert level performance through a process of deliberate practice. By documenting educational content and the student engagement and learning process, as well as feedback and personal reflection of educational sessions, blogs can also form the basis for a teaching portfolio, and provide evidence and data of scholarly teaching and educational scholarship. Looking into the future, having a collection of readily accessible indexed hyperlinked teaching material offers the potential to do on the spot teaching with illustrative material called up onto smart surfaces, and displayed on holographic interfaces.
Microservices in Web Objects Enabled IoT Environment for Enhancing Reusability.
Jarwar, Muhammad Aslam; Kibria, Muhammad Golam; Ali, Sajjad; Chong, Ilyoung
2018-01-26
In the ubiquitous Internet of Things (IoT) environment, reusing objects instead of creating new ones has become important in academia and industry. The situation becomes complex due to the availability of a huge number of connected IoT objects, where each individual service creates a new object instead of reusing an existing one to fulfill a requirement. A well-standardized mechanism not only improves the reusability of objects but also improves service modularity and extensibility, and reduces cost. A Web Objects enabled IoT environment applies the principle of reusability of objects in multiple IoT application domains through a central objects repository and microservices. To reuse objects with microservices and to maintain a relationship with them, this study presents an architecture for the Web of Objects platform. In the case of a similar request for an object, an already instantiated object from the same or another domain can be reused. Reuse of objects through microservices avoids duplication and reduces the time to search for and instantiate them from their registries. Further, this article presents an algorithm for microservices and related-objects discovery that considers the reusability of objects through the central objects repository. To support the reusability of objects, the necessary algorithm for object matching is also presented. To realize the reusability of objects in a Web Objects enabled IoT environment, a prototype has been designed and implemented based on a use case scenario. Finally, the results of the prototype have been analyzed and discussed to validate the proposed approach.
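The reuse-before-instantiate behavior described above can be sketched as a lookup against a central registry: a similar request returns an existing object, from the same or another domain, and a new object is created only when nothing matches. The registry layout and matching criterion below are illustrative assumptions, not the paper's actual algorithm.

```python
class ObjectRegistry:
    """Toy central objects repository: reuse an existing object when a
    similar request arrives; instantiate only when nothing matches."""

    def __init__(self):
        self._objects = {}   # (domain, capability) -> object id
        self._created = 0

    def acquire(self, domain, capability):
        """Return (object_id, newly_created). An already instantiated
        object from the same or another domain is reused whenever it
        offers the requested capability."""
        for (_dom, cap), oid in self._objects.items():
            if cap == capability:
                return oid, False          # reused, no duplicate created
        self._created += 1
        oid = "obj-%d" % self._created
        self._objects[(domain, capability)] = oid
        return oid, True                   # newly instantiated
```

Matching on capability rather than on domain is what lets an object instantiated for one application domain serve a later request from a different one.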
Repository Drift Backfilling Demonstrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Londe, I.; Dubois, J.Ph.; Bauer, C.
2008-07-01
The 'Backfilling Demonstrator' is one of the technological demonstrators developed by ANDRA in the framework of the feasibility studies for a geological repository for high-level long-lived (HL-LL) waste within a clay formation. The demonstrator concerns the standard and supporting backfills as defined in Andra's 2005 design. The standard backfill is intended to fill up almost all drifts of the underground repository in order to limit any deformation of the rock after the degradation of the drift lining. The supporting backfill only concerns a small portion of the volume to be backfilled, in order to counter the swelling pressure of the swelling clay contained in the sealing structures. The first objective of the demonstrator was to show the possibility of manufacturing a satisfactory backfill, in spite of the exiguity of the underground structures, and of reusing as much as possible the argillite muck. For the purpose of this experiment, the argillite muck was collected on Andra's work-site for the implementation of an underground research laboratory. Still ongoing, the second objective is to follow up the long-term evolution of the backfill. Approximately 200 m³ of compacted backfill material have been gathered in a large concrete tube simulating a repository drift. The standard backfill was manufactured exclusively with argillite. The supporting backfill was made by forming a mixture of argillite and sand. Operations were carried out mostly at Richwiller, close to Mulhouse, France. The objectives of the demonstrator were met: an application method was tested and proven satisfactory. The resulting dry densities are relatively high, although the moduli of deformation do not always reach the set goal. The selected objective for the demonstrator was a dry density corresponding to a relatively high compaction level (95% of the standard Proctor optimum [SPO]), for both pure argillite and the argillite-sand mixture.
The plate-percussion compaction technique was used and proved satisfactory. The measured dry densities are higher than the 95%-SPO objective. The implementation rates remain very low due to the experimental conditions involved. The metal supply mode would need to be revised before any industrial application is contemplated. The Demonstrator Program started in August 2004 and is followed up today over the long term. With that objective in mind, sensors and a water-saturation system have been installed. (author)
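The 95%-SPO compaction target used above is just a ratio of achieved dry density to the standard Proctor optimum reference. A small sketch of the check, with illustrative density values that are not measurements from the demonstrator:

```python
def compaction_ratio(dry_density, spo_dry_density):
    """Achieved dry density as a percentage of the standard Proctor
    optimum (SPO) reference density. Units cancel (e.g. both in g/cm3)."""
    return 100.0 * dry_density / spo_dry_density

def meets_target(dry_density, spo_dry_density, target_pct=95.0):
    """True when the compaction level reaches the target percentage."""
    return compaction_ratio(dry_density, spo_dry_density) >= target_pct
```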
A Safety Case Approach for Deep Geologic Disposal of DOE HLW and DOE SNF in Bedded Salt - 13350
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevougian, S. David; MacKinnon, Robert J.; Leigh, Christi D.
2013-07-01
The primary objective of this study is to investigate the feasibility and utility of developing a defensible safety case for disposal of United States Department of Energy (U.S. DOE) high-level waste (HLW) and DOE spent nuclear fuel (SNF) in a conceptual deep geologic repository that is assumed to be located in a bedded salt formation of the Delaware Basin [1]. A safety case is a formal compilation of evidence, analyses, and arguments that substantiate and demonstrate the safety of a proposed or conceptual repository. We conclude that a strong initial safety case for potential licensing can be readily compiled by capitalizing on the extensive technical basis that exists from prior work on the Waste Isolation Pilot Plant (WIPP), other U.S. repository development programs, and the work published through international efforts in salt repository programs such as in Germany. The potential benefits of developing a safety case include leveraging previous investments in WIPP to reduce future new repository costs, enhancing the ability to effectively plan for a repository and its licensing, and possibly expediting a schedule for a repository. A safety case will provide the necessary structure for organizing and synthesizing existing salt repository science and identifying any issues and gaps pertaining to safe disposal of DOE HLW and DOE SNF in bedded salt. The safety case synthesis will help DOE to plan its future R and D activities for investigating salt disposal using a risk-informed approach that prioritizes test activities that include laboratory, field, and underground investigations. It should be emphasized that the DOE has not made any decisions regarding the disposition of DOE HLW and DOE SNF. Furthermore, the safety case discussed herein is not intended to either site a repository in the Delaware Basin or preclude siting in other media at other locations.
Rather, this study simply presents an approach for accelerated development of a safety case for a potential DOE HLW and DOE SNF repository using the currently available technical basis for bedded salt. This approach includes a summary of the regulatory environment relevant to disposal of DOE HLW and DOE SNF in a deep geologic repository, the key elements of a safety case, the evolution of the safety case through the successive phases of repository development and licensing, and the existing technical basis that could be used to substantiate the safety of a geologic repository if it were to be sited in the Delaware Basin. We also discuss the potential role of an underground research laboratory (URL). (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-04-01
During the second half of fiscal year 1996, activities at the Yucca Mountain Site Characterization Project (Project) supported the objectives of the revised Program Plan released this period by the Office of Civilian Radioactive Waste Management of the US Department of Energy (Department). Outlined in the revised plan is a focused, integrated program of site characterization, design, engineering, environmental, and performance assessment activities that will achieve key Program and statutory objectives. The plan will result in the development of a license application for repository construction at Yucca Mountain, if the site is found suitable. Activities this period focused on two of the three near-term objectives of the revised plan: updating in 1997 the regulatory framework for determining the suitability of the site for the proposed repository concept and providing information for a 1998 viability assessment of continuing toward the licensing of a repository. The Project has also developed a new design approach that uses the advanced conceptual design published during the last reporting period as a base for developing a design that will support the viability assessment. The initial construction phase of the Thermal Testing Facility was completed and the first phase of the in situ heater tests began on schedule. In addition, phase-one construction was completed for the first of two alcoves that will provide access to the Ghost Dance fault.
Automated Student Model Improvement
ERIC Educational Resources Information Center
Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.
2012-01-01
Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsha Keister; Kathryn McBride
The Nuclear Waste Policy Act of 1982 (NWPA), as amended, assigned the Department of Energy (DOE) responsibility for developing and managing a Federal system for the disposal of spent nuclear fuel (SNF) and high-level radioactive waste (HLW). The Office of Civilian Radioactive Waste Management (OCRWM) is responsible for accepting, transporting, and disposing of SNF and HLW at the Yucca Mountain repository in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. OCRWM faces a near-term challenge: to develop and demonstrate a transportation system that will sustain safe and efficient shipments of SNF and HLW to a repository. To better inform and improve its current planning, OCRWM has extensively reviewed plans and other documents related to past high-visibility shipping campaigns of SNF and other radioactive materials within the United States. This report summarizes the results of this review and, where appropriate, lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landais, Patrick; Leclerc, Elisabeth; Mariotti, Andre
Obtaining a reference state of the environment before the beginning of construction work for a geological repository is essential as it will be useful for further monitoring during operations and beyond, thus keeping a memory of the original environmental state. The area and the compartments of the biosphere to be observed and monitored as well as the choice of the markers (e.g. bio-markers, biodiversity, quality of the environment, etc.) to be followed must be carefully selected. In parallel, the choice and selection of the environmental monitoring systems (i.e. scientific and technical criteria, social requirements) will be of paramount importance for the evaluation of the perturbations that could be induced during the operational phase of the repository exploitation. This paper presents learning points of the French environment observatory located in the Meuse/Haute-Marne that has been selected for studying the feasibility of the underground disposal of high level wastes in France. (authors)
Dentistry students' perceptions of learning management systems.
Handal, B; Groenlund, C; Gerzina, T
2010-02-01
This paper reports an exploratory survey study of students' perceptions of learning management systems (LMS) at the Faculty of Dentistry, University of Sydney. Two hundred and fifty-four students enrolled in the Bachelor of Dentistry and the Bachelor of Oral Health programmes participated in an online survey aimed at exploring their beliefs and attitudes as well as their preferences for eLearning tools. Results indicated a strong preference of students for using LMSs as resource repositories rather than for higher-order learning activities such as online discussion forums. This finding has implications for the design of educational resource modalities that support the development of essential graduate attributes such as information literacy and collaborative learning.
Topic maps for exploring nosological, lexical, semantic and HL7 structures for clinical data.
Paterson, Grace I; Grant, Andrew M; Soroka, Steven D
2008-12-01
A topic map is implemented for learning about clinical data associated with a hospital stay for patients diagnosed with chronic kidney disease, diabetes and hypertension. The question posed is: how might a topic map help bridge perspectival differences among communities of practice and help make commensurable the different classifications they use? The knowledge layer of the topic map was generated from existing ontological relationships in nosological, lexical, semantic and HL7 boundary objects. Discharge summaries, patient charts and clinical data warehouse entries rectified the clinical knowledge used in practice. These clinical data were normalized to the HL7 Clinical Document Architecture (CDA) markup standard and stored in the Clinical Document Repository. Each CDA entry was given a subject identifier and linked with the topic map. The ability of topic maps to function as the infostructure 'glue' is assessed using dimensions of semantic interoperability and commensurability.
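The topic map structure described, topics with subject identifiers, associations between topics, and occurrences pointing at repository entries, can be sketched with plain data structures. The topic ids, association type, and CDA entry ids below are invented for the illustration and are not the study's actual identifiers.

```python
# Topics carry human-readable names keyed by a subject identifier.
topics = {
    "ckd": {"name": "chronic kidney disease"},
    "dm":  {"name": "diabetes"},
    "htn": {"name": "hypertension"},
}

# Associations relate topics; occurrences point into the document
# repository (here, hypothetical CDA entry ids).
associations = [("ckd", "comorbid-with", "dm"), ("ckd", "comorbid-with", "htn")]
occurrences = {"ckd": ["cda-001", "cda-007"], "dm": ["cda-001"]}

def related(topic_id):
    """Topics linked to topic_id by any association, in either role."""
    out = set()
    for a, _rel, b in associations:
        if a == topic_id:
            out.add(b)
        if b == topic_id:
            out.add(a)
    return sorted(out)
```

Navigating from a topic to its associations and occurrences is what lets one classification's users reach documents filed under another's terms.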
Shamszaman, Zia Ush; Ara, Safina Showkat; Chong, Ilyoung; Jeong, Youn Kwae
2014-02-13
Recent advancements in the Internet of Things (IoT) and the Web of Things (WoT) accompany a smart life where real world objects, including sensing devices, are interconnected with each other. The Web representation of smart objects empowers innovative applications and services for various domains. To accelerate this approach, Web of Objects (WoO) focuses on the implementation aspects of bringing the assorted real world objects to the Web applications. In this paper, we propose an emergency fire management system in the WoO infrastructure. Consequently, we integrate the formation and management of Virtual Objects (ViO), which are derived from real world physical objects and are virtually connected with each other, into the semantic ontology model. The charm of using the semantic ontology is that it allows information reusability, extensibility and interoperability, which enable ViOs to uphold orchestration, federation, collaboration and harmonization. Our system is context aware, as it receives contextual environmental information from distributed sensors and detects emergency situations. To handle a fire emergency, we present a decision support tool for the emergency fire management team. The previous fire incident log is the basis of the decision support system. A log repository collects all the emergency fire incident logs from ViOs and stores them in a repository.
Lessons learned from DNA-based tool development and use in a genebank
USDA-ARS?s Scientific Manuscript database
In 2002, a molecular genetics laboratory was established at the United States Department of Agriculture Agricultural Research Service (USDA-ARS), National Clonal Germplasm Repository (NCGR), in Corvallis, Oregon. This facility houses the US national genebank for strawberry (Fragaria L.). A main obje...
A Digital Library for Education: The PEN-DOR Project.
ERIC Educational Resources Information Center
Fullerton, Karen; Greenberg, Jane; McClure, Maureen; Rasmussen, Edie; Stewart, Darin
1999-01-01
Describes Pen-DOR (Pennsylvania Education Network Digital Object Repository), a digital library designed to provide K-12 educators with access to multimedia resources and tools to create new lesson plans and modify existing ones via the World Wide Web. Discusses design problems of a distributed, object-oriented database architecture and describes…
What Four Million Mappings Can Tell You about Two Hundred Ontologies
NASA Astrophysics Data System (ADS)
Ghazvinian, Amir; Noy, Natalya F.; Jonquet, Clement; Shah, Nigam; Musen, Mark A.
The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal—an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
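The lexical-similarity step behind such mappings can be sketched simply: normalize concept names and synonyms, then pair concepts across two ontologies whenever any label matches. The two toy ontologies and their concept ids below are invented; BioPortal's actual matching pipeline is more sophisticated than this sketch.

```python
def normalize(label):
    """Crude label normalization: lowercase, dashes to spaces,
    collapse whitespace."""
    return " ".join(label.lower().replace("-", " ").split())

def lexical_mappings(onto_a, onto_b):
    """onto_*: dict of concept id -> list of labels (name + synonyms).
    Returns sorted (id_a, id_b) pairs sharing a normalized label."""
    index = {}
    for cid, labels in onto_a.items():
        for lab in labels:
            index.setdefault(normalize(lab), set()).add(cid)
    pairs = set()
    for cid_b, labels in onto_b.items():
        for lab in labels:
            for cid_a in index.get(normalize(lab), ()):
                pairs.add((cid_a, cid_b))
    return sorted(pairs)
```

Building an inverted index over one ontology's labels keeps the pass over the other ontology linear, which matters when the output is millions of mappings.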
OntoVIP: an ontology for the annotation of object models used for medical image simulation.
Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan
2014-12-01
This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology, that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e. FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Ketcham, R. A.; Hanlon, M.; Pettengill, M.; Ranganath, A.; Venkatesh, A.
2016-12-01
Due to advances in imaging modalities such as X-ray microtomography and scattered electron microscopy, 2D and 3D imaged datasets of rock microstructure on the nanometer to centimeter length scale allow investigation of nonlinear flow and mechanical phenomena using numerical approaches. This in turn produces various upscaled parameters required by subsurface flow and deformation simulators. However, a single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (http://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geoscience or engineering researchers not necessarily trained in computer science or data analysis. Our objective is to enable scientific inquiry and engineering decisions founded on a data-driven basis. We show how the data loaded in the portal can be documented, referenced in publications via digital object identifiers, visualized, and linked to other repositories. We then show preliminary results on integrating a remote parallel visualization and flow simulation workflow with the pore structures currently stored in the repository. We finally discuss the issues of collecting correct metadata, data discoverability and repository sustainability. This is the first repository for this particular data, but it is part of the wider ecosystem of geoscience data and model cyber-infrastructure called "Earthcube" (http://earthcube.org/) sponsored by the National Science Foundation.
For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.
Learning from the scientific legacies of W. Brutsaert and J.-Y. Parlange
USDA-ARS?s Scientific Manuscript database
Though the essence of the scientific literature is to be a repository of unaffiliated truths, scientific advancement fundamentally stems from the insights and efforts of individuals. This dichotomy can hide exemplars for young scholars of how to contribute to scientific understanding. This section o...
The Process of Designing for Learning: Understanding University Teachers' Design Work
ERIC Educational Resources Information Center
Bennett, Sue; Agostinho, Shirley; Lockyer, Lori
2017-01-01
Interest in how to support the design work of university teachers has led to research and development initiatives that include technology-based design-support tools, online repositories, and technical specifications. Despite these initiatives, remarkably little is known about the design work that university teachers actually do. This paper…
NASA Astrophysics Data System (ADS)
Campbell, J. D.; Heilman, P.; Goodrich, D. C.; Sadler, J.
2015-12-01
The objective for the USDA Long-Term Agroecosystem Research (LTAR) network Common Observatory Repository (CORe) is to provide data management services including archiving, discovery, and access for consistently observed data across all 18 nodes. LTAR members have an average of 56 years of diverse historic data. Each LTAR has designated a representative 'permanent' site as the location's common meteorological observatory. CORe implementation is phased, starting with meteorology, then adding hydrology, eddy flux, soil, and biology data. A design goal was to adopt existing best practices while minimizing the additional data management duties for the researchers. LTAR is providing support for data management specialists at the locations and the National Agricultural Library is providing central data management services. Maintaining continuity with historical observations is essential, so observations from both the legacy and new common methods are included in CORe. International standards are used to store robust descriptive metadata (ISO 19115) for the observation station and surrounding locale (WMO), sensors (SensorML), and activity (e.g., re-calibration, locale changes) to provide sufficient detail for novel data re-use for the next 50 years. To facilitate data submission, a simple text format was designed. Datasets in CORe will receive DOIs to encourage citations that give fair credit to data providers. Data and metadata access are designed to support multiple formats and naming conventions. An automated QC process is being developed to enhance comparability among LTAR locations and to generate QC process metadata. Data provenance is maintained with a permanent record of changes, including those by local scientists reviewing the automated QC results. Lessons learned so far include an increase in site acceptance of CORe following the decision to store data from both legacy and new common methods.
A larger-than-anticipated variety of currently used methods, with potentially significant differences for future data use, was found. Cooperative peer support among locations with the same sensors, coupled with central support, has reduced redundancy in procedural and data documentation.
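An automated QC pass of the kind described typically starts with plausibility-range checks that flag each observation and record a note, so the QC process itself leaves provenance metadata. The variable names and bounds below are illustrative assumptions, not CORe's actual QC rules.

```python
# Plausibility bounds per variable (illustrative, not CORe's real limits).
PLAUSIBLE = {"air_temp_c": (-60.0, 60.0), "rh_pct": (0.0, 100.0)}

def qc_check(record):
    """record: dict of variable -> observed value.
    Returns (flags, notes): per-variable pass/fail flags plus
    human-readable notes for the QC process metadata."""
    flags, notes = {}, []
    for var, value in record.items():
        lo, hi = PLAUSIBLE.get(var, (float("-inf"), float("inf")))
        ok = lo <= value <= hi
        flags[var] = "pass" if ok else "fail"
        if not ok:
            notes.append("%s=%s outside [%s, %s]" % (var, value, lo, hi))
    return flags, notes
```

Flagging rather than deleting suspect values preserves the original observation, so a local scientist reviewing the automated results can overrule the check, as the provenance design above requires.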
A Virtual Rock Physics Laboratory Through Visualized and Interactive Experiments
NASA Astrophysics Data System (ADS)
Vanorio, T.; Di Bonito, C.; Clark, A. C.
2014-12-01
As new scientific challenges demand more comprehensive and multidisciplinary investigations, laboratory experiments are not expected to become simpler and/or faster. Experimental investigation is an indispensable element of scientific inquiry and must play a central role in the way current and future generations of scientist make decisions. To turn the complexity of laboratory work (and that of rocks!) into dexterity, engagement, and expanded learning opportunities, we are building an interactive, virtual laboratory reproducing in form and function the Stanford Rock Physics Laboratory, at Stanford University. The objective is to combine lectures on laboratory techniques and an online repository of visualized experiments consisting of interactive, 3-D renderings of equipment used to measure properties central to the study of rock physics (e.g., how to saturate rocks, how to measure porosity, permeability, and elastic wave velocity). We use a game creation system together with 3-D computer graphics, and a narrative voice to guide the user through the different phases of the experimental protocol. The main advantage gained in employing computer graphics over video footage is that students can virtually open the instrument, single out its components, and assemble it. Most importantly, it helps describe the processes occurring within the rock. These latter cannot be tracked while simply recording the physical experiment, but computer animation can efficiently illustrate what happens inside rock samples (e.g., describing acoustic waves, and/or fluid flow through a porous rock under pressure within an opaque core-holder - Figure 1). The repository of visualized experiments will complement lectures on laboratory techniques and constitute an on-line course offered through the EdX platform at Stanford. 
This will provide a virtual laboratory for anyone, anywhere to facilitate teaching/learning of introductory laboratory classes in Geophysics and expand the number of courses that can be offered for curricula in Earth Sciences. The primary goal is to open up a research laboratory such as the one available at Stanford to promising students worldwide who are currently left out of such educational resources.
Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree
Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid
2016-01-01
Introduction: Differential diagnosis of Erythmato-Squamous Diseases (ESD) is a major challenge in the field of dermatology. The ESD diseases are placed into six different classes. Data mining is the process of detecting hidden patterns; in the case of ESD, data mining helps us to predict the diseases. Different algorithms were developed for this purpose. Objective: We aimed to use the Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. Methods: We used the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology. For this purpose, the dermatology data set was obtained from the UCI machine learning repository. The Clementine 12.0 software from IBM was used for modelling. To evaluate the model, we calculated its accuracy, sensitivity and specificity. Results: The proposed model had an accuracy of 94.84% (standard deviation: 24.42) for correct prediction of the ESD disease. Conclusions: Results indicated that using this classifier could be useful, but it is strongly recommended that a combination of machine learning methods could be more useful in terms of prediction of ESD. PMID:28077889
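At the core of a CART classifier like the one evaluated here is the splitting criterion: pick the feature threshold that minimizes weighted Gini impurity. A minimal pure-Python illustration for a single numeric feature; the tiny dataset in the test is invented, not the UCI dermatology data.

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared
    class proportions. 0.0 means the node is pure."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Best threshold on one numeric feature by weighted Gini.
    Returns (threshold, weighted_impurity); splits are x <= t vs x > t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best
```

A full CART recursively applies this split selection over all features until nodes are pure or a stopping rule fires; for six ESD classes the impurity sum simply runs over six class proportions.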
NASA Astrophysics Data System (ADS)
Klump, J. F.; Ulbricht, D.; Conze, R.
2014-12-01
The Continental Deep Drilling Programme (KTB) was a scientific drilling project from 1987 to 1995 near Windischeschenbach, Bavaria. The main super-deep borehole reached a depth of 9,101 meters into the Earth's continental crust. The project used the most current equipment for data capture and processing. After the end of the project, key data were disseminated through the web portal of the International Continental Scientific Drilling Program (ICDP). The scientific reports were published as printed volumes. As similar projects have also experienced, it becomes increasingly difficult to maintain a data portal over a long time. Changes in software and underlying hardware make a migration of the entire system inevitable. Around 2009 the data presented on the ICDP web portal were migrated to the Scientific Drilling Database (SDDB) and published through DataCite using Digital Object Identifiers (DOI) as persistent identifiers. The SDDB portal used a relational database with a complex data model to store data and metadata. A PHP-based content management system with custom modifications made it possible to navigate and browse datasets using the metadata and then download datasets. The data repository software eSciDoc allows storing self-contained packages consistent with the OAIS reference model. Each package consists of binary data files and XML metadata. Using a REST API, the packages can be stored in the eSciDoc repository and searched using the XML metadata. During the last maintenance cycle of the SDDB, the data and metadata were migrated into the eSciDoc repository. Discovery metadata were generated following the GCMD-DIF, ISO 19115 and DataCite schemas. The eSciDoc repository allows storing an arbitrary number of XML metadata records with each data object. In addition to descriptive metadata, each data object may contain pointers to related materials, such as IGSN metadata to link datasets to physical specimens, or identifiers of literature interpreting the data.
Datasets are presented through XSLT stylesheet transformations of the stored metadata. The presentation shows several migration cycles of data and metadata, each driven by aging software systems. Currently the datasets reside as self-contained entities in a repository system that is ready for digital preservation.
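The discovery-metadata generation step described above can be sketched in miniature. This is only an illustration using simplified DataCite-kernel element names; the DOI and dataset values below are invented examples, not actual SDDB records.

```python
# Sketch: build a minimal DataCite-style discovery-metadata record for a
# dataset. Element names loosely follow the DataCite kernel; the DOI, title,
# creator, and year are invented placeholder values.
import xml.etree.ElementTree as ET

def datacite_record(doi: str, title: str, creator: str, year: str) -> str:
    resource = ET.Element("resource")
    ET.SubElement(resource, "identifier", identifierType="DOI").text = doi
    creators = ET.SubElement(resource, "creators")
    ET.SubElement(ET.SubElement(creators, "creator"), "creatorName").text = creator
    titles = ET.SubElement(resource, "titles")
    ET.SubElement(titles, "title").text = title
    ET.SubElement(resource, "publicationYear").text = year
    return ET.tostring(resource, encoding="unicode")

xml_record = datacite_record(
    "10.1594/EXAMPLE.0001", "KTB borehole logging data", "KTB Project", "2009"
)
```

A record like this could then be stored alongside the binary files as one of the package's XML metadata records and rendered for display by an XSLT stylesheet.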
Polepalli Ramesh, Balaji; Belknap, Steven M; Li, Zuofeng; Frid, Nadya; West, Dennis P
2014-01-01
Background The Food and Drug Administration's (FDA) Adverse Event Reporting System (FAERS) is a repository of spontaneously reported adverse drug events (ADEs) for FDA-approved prescription drugs. FAERS reports include both structured reports and unstructured narratives. The narratives often include essential information for evaluation of the severity, causality, and description of ADEs that is not present in the structured data. The timely identification of unknown toxicities of prescription drugs is an important, unsolved problem. Objective The objective of this study was to develop an annotated corpus of FAERS narratives and a biomedical named entity tagger to automatically identify ADE-related information in the FAERS narratives. Methods We developed an annotation guideline and annotated medication information and adverse event-related entities in 122 FAERS narratives comprising approximately 23,000 word tokens. A named entity tagger using supervised machine learning approaches was built for detecting medication information and adverse event entities using various categories of features. Results The annotated corpus had an agreement of over 0.9 Cohen's kappa for medication and adverse event entities. The best-performing tagger achieved an overall F1 score of 0.73 for detection of medication, adverse event, and other named entities. Conclusions In this study, we developed an annotated corpus of FAERS narratives and machine learning-based models for automatically extracting medication and adverse event information from the FAERS narratives. Our study is an important step towards enriching the FAERS data for postmarketing pharmacovigilance. PMID:25600332
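The abstract mentions a supervised tagger built on "various categories of features"; a common shape for such per-token features (word shape, affixes, context words) can be sketched as below. The feature names and the example sentence are illustrative assumptions, not the paper's exact feature set.

```python
# Sketch of per-token features for a supervised named-entity tagger over ADE
# narratives. Feature names are illustrative; a real system would feed these
# into a sequence model such as a CRF.
def token_features(tokens, i):
    word = tokens[i]
    return {
        "lower": word.lower(),                 # normalized surface form
        "suffix3": word[-3:],                  # crude morphology
        "is_capitalized": word[0].isupper(),   # word-shape cue
        "has_digit": any(c.isdigit() for c in word),  # dosage-like tokens
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

tokens = "Patient developed rash after taking Drug500mg".split()
feats = token_features(tokens, 2)  # features for "rash"
```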
Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian
2017-06-05
Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capture and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study was to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model was used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are useful for building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
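The ITEM/ITEM_GROUP pattern mentioned above can be illustrated with a toy data model. The class and field names here are simplified assumptions for illustration, not the actual CIMI Reference Model classes, and the CDE identifier is invented.

```python
# Minimal sketch of the CIMI ITEM/ITEM_GROUP pattern: an ITEM_GROUP (a
# "mini-Archetype") bundles reusable ITEMs, each of which can point back to a
# caDSR common data element. All names and ids are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:                       # a single data element, e.g. a TCGA CDE
    name: str
    data_type: str
    cde_id: str = ""              # link back to the caDSR CDE, if any

@dataclass
class ItemGroup:                  # a mini-Archetype: a named group of items
    name: str
    items: List[Item] = field(default_factory=list)

pharma = ItemGroup("clinical_pharmaceutical", [
    Item("drug_name", "string", "CDE-0000001"),  # invented id
    Item("total_dose", "integer"),
])
```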
Diagnostic imaging learning resources evaluated by students and recent graduates.
Alexander, Kate; Bélisle, Marilou; Dallaire, Sébastien; Fernandez, Nicolas; Doucet, Michèle
2013-01-01
Many learning resources can help students develop the problem-solving abilities and clinical skills required for diagnostic imaging. This study explored veterinary students' perceptions of the usefulness of a variety of learning resources. Perceived resource usefulness was measured for different levels of students and for academic versus clinical preparation. Third-year (n=139) and final (fifth) year (n=105) students and recent graduates (n=56) completed questionnaires on perceived usefulness of each resource. Resources were grouped for comparison: abstract/low complexity (e.g., notes, multimedia presentations), abstract/high complexity (e.g., Web-based and film case repositories), concrete/low complexity (e.g., large-group "clicker" workshops), and concrete/high complexity (e.g., small-group interpretation workshops). Lower-level students considered abstract/low-complexity resources more useful for academic preparation and concrete resources more useful for clinical preparation. Higher-level students/recent graduates also considered abstract/low-complexity resources more useful for academic preparation. For all levels, lecture notes were considered highly useful. Multimedia slideshows were an interactive complement to notes. The usefulness of a Web-based case repository was limited by accessibility problems and difficulty. Traditional abstract/low-complexity resources were considered useful for more levels and contexts than expected. Concrete/high-complexity resources need to better represent clinical practice to be considered more useful for clinical preparation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barariu, Gheorghe
2007-07-01
The paper presents new perspectives on the development of the L/ILW Final Repository Project, which will be built near the Cernavoda NPP. The repository is designed to satisfy the main performance objectives in accordance with IAEA recommendations. Starting in October 1996, Romania became a country with an operating nuclear power plant. Reactor 2 reached criticality on May 6, 2007, and will be put into commercial operation in September 2007. The Ministry of Economy and Finance has decided to proceed with the commissioning of Units 3 and 4 of Cernavoda NPP by 2014. The Strategy for radioactive waste management was elaborated by the National Agency for Radioactive Waste (ANDRAD), the jurisdictional authority for definitive disposal and the coordination of nuclear spent fuel and radioactive waste management (Order 844/2004), with attributions established by Governmental Decision (GO) 31/2006. The Strategy specifies the commissioning of the Saligny L/IL Radwaste Repository near Cernavoda NPP in 2014. When designing the L/IL Radwaste Repository, the following prerequisites have been taken into account: 1) Cernavoda NPP will be equipped with 4 CANDU 6 units; 2) national legislation on radwaste management will be reviewed and/or completed to harmonize with EU standards; and 3) the selected site is now in the process of confirmation after a comprehensive set of interdisciplinary investigations. (author)
Semantic Indexing of Medical Learning Objects: Medical Students' Usage of a Semantic Network
Gießler, Paul; Ohnesorge-Radtke, Ursula; Spreckelsen, Cord
2015-01-01
Background The Semantically Annotated Media (SAM) project aims to provide a flexible platform for searching, browsing, and indexing medical learning objects (MLOs) based on a semantic network derived from established classification systems. Primarily, SAM supports the Aachen emedia skills lab, but SAM is ready for indexing distributed content, and the Simple Knowledge Organizing System standard provides a means for easily upgrading or even exchanging SAM's semantic network. There is a lack of research addressing the usability of MLO indexes or search portals like SAM and the user behavior with such platforms. Objective The purpose of this study was to assess the usability of SAM by investigating characteristic user behavior of medical students accessing MLOs via SAM. Methods In this study, we chose a mixed-methods approach. Lean usability testing was combined with usability inspection by having the participants complete four typical usage scenarios before filling out a questionnaire. The questionnaire was based on the IsoMetrics usability inventory. Direct user interaction with SAM (mouse clicks and pages accessed) was logged. Results The study analyzed the typical usage patterns and habits of students using a semantic network for accessing MLOs. Four scenarios capturing characteristics of typical tasks to be solved by using SAM yielded high ratings of usability items and showed good results concerning the consistency of indexing by different users. Long-tail phenomena emerged, as is typical of a collaborative Web 2.0 platform. Suitable but nonetheless rarely used keywords were assigned to MLOs by some users. Conclusions It is possible to develop a Web-based tool with high usability and acceptance for indexing and retrieval of MLOs. SAM can be applied to indexing multicentered repositories of MLOs collaboratively. PMID:27731860
mHealthApps: A Repository and Database of Mobile Health Apps.
Xu, Wenlong; Liu, Yin
2015-03-18
The market of mobile health (mHealth) apps has rapidly evolved in the past decade. With more than 100,000 mHealth apps currently available, there is no centralized resource that collects information on these health-related apps for researchers in this field to effectively evaluate the strengths and weaknesses of these apps. The objective of this study was to create a centralized mHealth app repository. We expect the analysis of information in this repository to provide insights for future mHealth research developments. We focused on apps from the two most established app stores, the Apple App Store and the Google Play Store. We extracted detailed information on each health-related app from these two app stores via our Python crawling program, and then stored the information in both a user-friendly array format and a standard JavaScript Object Notation (JSON) format. We have developed a centralized resource that provides detailed information on more than 60,000 health-related apps from the Apple App Store and the Google Play Store. Using this information resource, we analyzed thousands of apps systematically and provide an overview of the trends for mHealth apps. This unique database allows the meta-analysis of health-related apps and provides guidance for research designs of future apps in the mHealth field.
LingoBee: Engaging Mobile Language Learners through Crowd-Sourcing
ERIC Educational Resources Information Center
Petersen, Sobah Abbas; Procter-Legg, Emma; Cacchione, Annamaria
2014-01-01
This paper describes three case studies, where language learners were invited to use "LingoBee" as a means of supporting their language learning. LingoBee is a mobile app that provides user-generated language content in a cloud-based shared repository. Assuming that today's students are mobile savvy and "Digital Natives" able…
ASK-LDT 2.0: A Web-Based Graphical Tool for Authoring Learning Designs
ERIC Educational Resources Information Center
Zervas, Panagiotis; Fragkos, Konstantinos; Sampson, Demetrios G.
2013-01-01
During the last decade, Open Educational Resources (OERs) have gained increased attention for their potential to support open access, sharing and reuse of digital educational resources. Therefore, a large amount of digital educational resources have become available worldwide through web-based open access repositories which are referred to as…
Online Concept Maps: Enhancing Collaborative Learning by Using Technology with Concept Maps.
ERIC Educational Resources Information Center
Canas, Alberto J.; Ford, Kenneth M.; Novak, Joseph D.; Hayes, Patrick; Reichherzer, Thomas R.; Suri, Niranjan
2001-01-01
Describes a collaborative software system that allows students from distant schools to share claims derived from their concept maps. Sharing takes place by accessing The Knowledge Soup, a repository of propositions submitted by students and stored on a computer server. Students can use propositions from other students to enhance their concept…
Making It Work: Creating a Student-Friendly Repository of Instructional Videos
ERIC Educational Resources Information Center
Keba, Michelle; Segno, Jamie; Schofield, Michael
2015-01-01
This case study investigates how a team of librarians at Nova Southeastern University (NSU) worked together to assess and optimize their library's current instructional videos in order to create a mobile-first video hosting platform, known as LibraryLearn. Instructional library videos serve as invaluable resources for students who are not present…
ERIC Educational Resources Information Center
Park, Sanghoon; McLeod, Kenneth
2018-01-01
Open Educational Resources (OER) can offer educators the necessary flexibility for tailoring educational resources to better fit their educational goals. Although the number of OER repositories is growing fast, few studies have been conducted to empirically test the effectiveness of OER integration in the classroom. Furthermore, very little is…
ERIC Educational Resources Information Center
Corlett, Bradly
2014-01-01
Several recent issues and trends in online education have resulted in consolidation of efforts for Massive Open Online Courses (MOOCs), increased Open Educational Resources (OER) in the form of asynchronous course repositories, with noticeable increases in governance and policy amplification. These emerging enrollment trends in alternative online…
Generic Argillite/Shale Disposal Reference Case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco
Radioactive waste disposal in a deep subsurface repository hosted in clay/shale/argillite is a subject of widespread interest given the desirable isolation properties, geochemically reduced conditions, and widespread geologic occurrence of this rock type (Hansen 2010; Bianchi et al. 2013). Bianchi et al. (2013) provides a description of diffusion in a clay-hosted repository based on single-phase flow and full saturation using parametric data from documented studies in Europe (e.g., ANDRA 2005). The predominance of diffusive transport and sorption phenomena in these clay media are key attributes that impede radionuclide mobility, making clay rock formations target sites for disposal of high-level radioactive waste. The reports by Hansen et al. (2010) and those from numerous studies in clay-hosted underground research laboratories (URLs) in Belgium, France and Switzerland outline the extensive scientific knowledge obtained to assess long-term clay/shale/argillite repository isolation performance of nuclear waste. In the past several years under the UFDC, various kinds of models have been developed for argillite repositories to demonstrate the model capability, understand the spatial and temporal alteration of the repository, and evaluate different scenarios. These models include the coupled Thermal-Hydrological-Mechanical (THM) and Thermal-Hydrological-Mechanical-Chemical (THMC) models (e.g. Liu et al. 2013; Rutqvist et al. 2014a, Zheng et al. 2014a) that focus on THMC processes in the Engineered Barrier System (EBS) bentonite and argillite host rock, the large-scale hydrogeologic model (Bianchi et al. 2014) that investigates the hydraulic connection between an emplacement drift and surrounding hydrogeological units, and Disposal Systems Evaluation Framework (DSEF) models (Greenberg et al. 2013) that evaluate thermal evolution in the host rock approximated as a thermal conduction process to facilitate the analysis of design options.
However, the assumptions and the properties (parameters) used in these models are different, which not only make inter-model comparisons difficult, but also compromise the applicability of the lessons learned from one model to another model. The establishment of a reference case would therefore be helpful to set up a baseline for model development. A generic salt repository reference case was developed in Freeze et al. (2013), and the generic argillite repository reference case is presented in this report. The definition of a reference case requires the characterization of the waste inventory, waste form, waste package, repository layout, EBS backfill, host rock, and biosphere. This report mainly documents the processes in EBS bentonite and host rock that are potentially important for performance assessment and the properties that are needed to describe these processes, with brief descriptions of other components such as waste inventory, waste form, waste package, repository layout, aquifer, and biosphere. A thorough description of the generic argillite repository reference case will be given in Jové Colon et al. (2014).
Cross-Cutting Risk Framework: Mining Data for Common Risks Across the Portfolio
NASA Technical Reports Server (NTRS)
Klein, Gerald A., Jr.; Ruark, Valerie
2017-01-01
The National Aeronautics and Space Administration (NASA) defines risk management as an integrated framework, combining risk-informed decision making and continuous risk management to foster forward-thinking and decision making from an integrated risk perspective. Therefore, decision makers must have access to risks outside of their own project to gain the knowledge that provides the integrated risk perspective. Through the Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) Business Change Initiative (BCI), risks were integrated into one repository to facilitate access to risk data between projects. With the centralized repository, communications between the FPD, project managers, and risk managers improved, and GSFC created the cross-cutting risk framework (CCRF) team. The creation of the consolidated risk repository, in parallel with the initiation of monthly FPD risk manager and risk governance board meetings, is now providing a complete risk management picture spanning the entire directorate. This paper will describe the challenges, methodologies, tools, and techniques used to develop the CCRF, and the lessons learned as the team collectively worked to identify risks that FPD programs and projects had in common, both past and present.
Huang, Haiyan; Liu, Chun-Chi; Zhou, Xianghong Jasmine
2010-04-13
The rapid accumulation of gene expression data has offered unprecedented opportunities to study human diseases. The National Center for Biotechnology Information Gene Expression Omnibus is currently the largest database that systematically documents the genome-wide molecular basis of diseases. However, thus far, this resource has been far from fully utilized. This paper describes the first study to transform public gene expression repositories into an automated disease diagnosis database. Particularly, we have developed a systematic framework, including a two-stage Bayesian learning approach, to achieve the diagnosis of one or multiple diseases for a query expression profile along a hierarchical disease taxonomy. Our approach, including standardizing cross-platform gene expression data and heterogeneous disease annotations, allows analyzing both sources of information in a unified probabilistic system. A high level of overall diagnostic accuracy was shown by cross validation. It was also demonstrated that the power of our method can increase significantly with the continued growth of public gene expression repositories. Finally, we showed how our disease diagnosis system can be used to characterize complex phenotypes and to construct a disease-drug connectivity map.
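The two-stage diagnosis along a hierarchical disease taxonomy described above can be illustrated with a toy example: first pick the best-scoring top-level disease class, then the best subclass within it. The taxonomy and the scores below are invented stand-ins for the Bayesian model's outputs; the real system scores standardized expression profiles.

```python
# Toy sketch of two-stage classification along a disease taxonomy. Stage 1
# selects a top-level class; stage 2 selects a subtype within that class.
# All class names and scores are illustrative.
top_level = {"cancer": 0.7, "metabolic": 0.3}          # stage-1 scores
subtypes = {
    "cancer": {"leukemia": 0.6, "lymphoma": 0.4},      # stage-2 scores
    "metabolic": {"diabetes": 0.9, "obesity": 0.1},
}

def diagnose(top_scores, sub_scores):
    best_top = max(top_scores, key=top_scores.get)
    best_sub = max(sub_scores[best_top], key=sub_scores[best_top].get)
    return best_top, best_sub

result = diagnose(top_level, subtypes)  # → ('cancer', 'leukemia')
```

Restricting the stage-2 search to the chosen branch is what makes the hierarchy useful: each classifier only has to discriminate among siblings rather than among all diseases at once.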
Mont Terri Underground Rock Laboratory, Switzerland-Research Program And Key Results
NASA Astrophysics Data System (ADS)
Nussbaum, C. O.; Bossart, P. J.
2012-12-01
Argillaceous formations generally act as aquitards because of their low hydraulic conductivities. This property, together with the large retention capacity of clays for cationic contaminants and the potential for self-sealing, has brought clay formations into focus as potential host rocks for the geological disposal of radioactive waste. Excavated in the Opalinus Clay formation, the Mont Terri underground rock laboratory in the Jura Mountains of NW Switzerland is an important international test site for researching clay formations. Research is carried out in the underground facility, which is located adjacent to the security gallery of the Mont Terri motorway tunnel. Fifteen partners from European countries, USA, Canada and Japan participate in the project. The objectives of the research program are to analyze the hydrogeological, geochemical and rock mechanical properties of the Opalinus Clay, to determine the changes induced by the excavation of galleries and by heating of the rock formation, to test sealing and container emplacement techniques and to evaluate and improve suitable investigation techniques. For the safety of deep geological disposal, it is of key importance to understand the processes occurring in the undisturbed argillaceous environment, as well as the processes in a disturbed system, during the operation of the repository. The objectives are related to: 1. Understanding processes and mechanisms in undisturbed clays and 2. Experiments related to repository-induced perturbations. Experiments of the first group are dedicated to: i) Improvement of drilling and excavation technologies and sampling methods; ii) Estimation of hydrogeological, rock mechanical and geochemical parameters of the undisturbed Opalinus Clay. 
Upscaling of parameters from laboratory to in situ scale; iii) Geochemistry of porewater and natural gases; evolution of porewater over time scales; iv) Assessment of long-term hydraulic transients associated with erosion and thermal scenarios and v) Evaluation of diffusion and retention parameters for long-lived radionuclides. Experiments related to repository-induced perturbations are focused on: i) Influence of rock liner on the disposal system and the buffering potential of the host rock; ii) Self-sealing processes in the excavation damaged zone; iii) Hydro-mechanical coupled processes (e.g. stress redistributions and pore pressure evolution during excavation); iv) Thermo-hydro-mechanical-chemical coupled processes (e.g. heating of bentonite and host rock) and v) Gas-induced transport of radionuclides in porewater and along interfaces in the engineered barrier system. A third research direction is to demonstrate the feasibility of repository construction and long-term safety after repository closure. Demonstration experiments can contribute to improving the reliability of the scientific basis for the safety assessment of future geological repositories, particularly if they are performed on a large scale and with a long duration. These experiments include the construction and installation of engineered barriers on a 1:1 scale: i) Horizontal emplacement of canisters; ii) Evaluation of the corrosion of container materials; repository re-saturation; iii) Sealing of boreholes and repository access tunnels and iv) Long-term monitoring of the repository. References Bossart, P. & Thury, M. (2008): Mont Terri Rock Laboratory. Project, Programme 1996 to 2007 and Results. - Rep. Swiss Geol. Surv. 3.
A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications
NASA Technical Reports Server (NTRS)
Grossman, Robert L.; Northcutt, Dave
1996-01-01
Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support the aforementioned intensive queries, but because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than using a full-functioned database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.
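The idea of an information repository sitting between layers can be sketched in miniature: a cache layer consults shared access statistics when deciding which objects to keep resident. All class names and the eviction policy here are illustrative assumptions, not the paper's design.

```python
# Sketch: a shared information repository records access statistics, and the
# cache layer above the storage system consults it to pick eviction victims.
class InfoRepository:
    def __init__(self):
        self.access_counts = {}  # object id -> number of accesses

    def record_access(self, obj_id):
        self.access_counts[obj_id] = self.access_counts.get(obj_id, 0) + 1

class CacheLayer:
    def __init__(self, info, capacity=2):
        self.info = info
        self.capacity = capacity
        self.cache = {}          # object id -> cached payload

    def get(self, obj_id, fetch):
        self.info.record_access(obj_id)
        if obj_id not in self.cache:
            if len(self.cache) >= self.capacity:
                # evict the least-accessed object, per the shared statistics
                coldest = min(self.cache,
                              key=lambda k: self.info.access_counts.get(k, 0))
                del self.cache[coldest]
            self.cache[obj_id] = fetch(obj_id)  # pull from the layer below
        return self.cache[obj_id]
```

Because the statistics live in a repository between the layers, the storage layer below could consult the same counts when deciding migration between disk and tape.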
Building a genome database using an object-oriented approach.
Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud
2002-01-01
GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, an expert solution was devised, and the solution was represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.
Probabilistic Criticality Consequence Evaluation (SCPB:N/A)
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Gottlieb; J.W. Davis; J.R. Massari
1996-09-04
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development (WPD) department with the objective of providing a comprehensive, conservative estimate of the consequences of the criticality which could possibly occur as the result of commercial spent nuclear fuel emplaced in the underground repository at Yucca Mountain. The consequences of criticality are measured principally in terms of the resulting changes in radionuclide inventory as a function of the power level and duration of the criticality. The purpose of this analysis is to extend the prior estimates of increased radionuclide inventory (Refs. 5.52 and 5.54), for both internal and external criticality. This analysis, and similar estimates and refinements to be completed before the end of fiscal year 1997, will be provided as input to Total System Performance Assessment-Viability Assessment (TSPA-VA) to demonstrate compliance with the repository performance objectives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leigh, Christi D.; Hansen, Francis D.
This report summarizes the state of salt repository science, reviews many of the technical issues pertaining to disposal of heat-generating nuclear waste in salt, and proposes several avenues for future science-based activities to further the technical basis for disposal in salt. There are extensive salt formations in the forty-eight contiguous states, and many of them may be worthy of consideration for nuclear waste disposal. The United States has extensive experience in salt repository sciences, including an operating facility for disposal of transuranic wastes. The scientific background for salt disposal including laboratory and field tests at ambient and elevated temperature, principles of salt behavior, potential for fracture damage and its mitigation, seal systems, chemical conditions, advanced modeling capabilities and near-future developments, performance assessment processes, and international collaboration are all discussed. The discussion of salt disposal issues is brought current, including a summary of recent international workshops dedicated to high-level waste disposal in salt. Lessons learned from Sandia National Laboratories' experience on the Waste Isolation Pilot Plant and the Yucca Mountain Project as well as related salt experience with the Strategic Petroleum Reserve are applied in this assessment. Disposal of heat-generating nuclear waste in a suitable salt formation is attractive because the material is essentially impermeable, self-sealing, and thermally conductive. Conditions are chemically beneficial, and a significant experience base exists in understanding this environment. Within the period of institutional control, overburden pressure will seal fractures and provide a repository setting that limits radionuclide movement. A salt repository could potentially achieve total containment, with no releases to the environment in undisturbed scenarios for as long as the region is geologically stable.
Much of the experience gained from United States repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, helps define a clear strategy for a heat-generating nuclear waste repository in salt.
Bialecki, Brian; Park, James; Tilkin, Mike
2016-08-01
The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and today's healthcare systems, which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects make this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.
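The core idea above, storing arbitrary custom metadata next to each object and searching on it, can be sketched in miniature. A real deployment would use an object-storage system's own API; this in-memory version and its DICOM-flavored field names are illustrative assumptions only.

```python
# Sketch: an object store that keeps extensible custom metadata with each
# stored object and supports exact-match search over that metadata.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # object id -> (payload, metadata dict)

    def put(self, object_id, payload, **metadata):
        # arbitrary keyword arguments become the object's custom metadata
        self._objects[object_id] = (payload, metadata)

    def search(self, **criteria):
        # return ids of objects whose metadata matches every criterion
        return [oid for oid, (_, meta) in self._objects.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

store = ObjectStore()
store.put("img-001", b"...", modality="CT", body_part="chest", accession="A123")
store.put("img-002", b"...", modality="MR", body_part="head", accession="A124")
```

Because the metadata schema is open-ended, new context (study descriptions, report findings, outcome flags) can be attached later without migrating existing objects, which is what makes the store minable for "truly relevant content."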
re3data.org - a global registry of research data repositories
NASA Astrophysics Data System (ADS)
Pampel, Heinz; Vierkant, Paul; Elger, Kirsten; Bertelmann, Roland; Witt, Michael; Schirmbacher, Peter; Rücknagel, Jessika; Kindling, Maxi; Scholze, Frank; Ulrich, Robert
2016-04-01
re3data.org, the registry of research data repositories, lists over 1,400 research data repositories from all over the world, making it the largest and most comprehensive online catalog of research data repositories on the web. The registry is a valuable tool for researchers, funding organizations, publishers and libraries. re3data.org provides detailed information about research data repositories, and its distinctive icons help researchers to easily identify relevant repositories for accessing and depositing data sets [1]. Funding agencies like the European Commission [2] and research institutions like the University of Bielefeld [3] already recommend the use of re3data.org in their guidelines and policies. Several publishers and journals like Copernicus Publications, PeerJ, and Nature's Scientific Data recommend re3data.org in their editorial policies as a tool for the easy identification of appropriate data repositories to store research data. Project partners in re3data.org are the Library and Information Services department (LIS) of the GFZ German Research Centre for Geosciences, the Computer and Media Service at the Humboldt-Universität zu Berlin, the Purdue University Libraries and the KIT Library at the Karlsruhe Institute of Technology (KIT). After its merger with the U.S. American DataBib in 2014, re3data.org continues from 2016 on as a service of DataCite, the international organization for the registration of Digital Object Identifiers (DOIs) for research data, which aims to improve their citation. The poster describes the current status and the future plans of re3data.org. [1] Pampel H, et al. (2013) Making Research Data Repositories Visible: The re3data.org Registry. PLoS ONE 8(11): e78080. doi:10.1371/journal.pone.0078080. [2] European Commission (2015): Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020.
Available: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf Accessed 11 January 2016. [3] Bielefeld University (2013): Resolution on Research Data Management. Available: http://data.uni-bielefeld.de/en/resolution Accessed 11 January 2016.
Logistics Lessons Learned in NASA Space Flight
NASA Technical Reports Server (NTRS)
Evans, William A.; DeWeck, Olivier; Laufer, Deanna; Shull, Sarah
2006-01-01
The Vision for Space Exploration sets out a number of goals, involving both strategic and tactical objectives. These include returning the Space Shuttle to flight, completing the International Space Station, and conducting human expeditions to the Moon by 2020. Each of these goals has profound logistics implications. In considering these objectives, the need for a study of NASA logistics lessons learned was recognized. The study endeavors to identify both the logistics needs of space exploration and the challenges encountered in developing past logistics architectures and designing space systems. This study may also serve as guidance in the development of an integrated logistics architecture for future human missions to the Moon and Mars. This report first summarizes current logistics practices for the Space Shuttle Program (SSP) and the International Space Station (ISS), examining the practices of manifesting, stowage, inventory tracking, waste disposal, and return logistics. The key finding of this examination is that while current practices have many positive aspects, there are also several shortcomings: a high level of excess complexity, redundancy of information and lack of a common database, and a large human-in-the-loop component. Later sections of this report describe the methodology and results of our work to systematically gather logistics lessons learned from past and current human spaceflight programs, and to validate these lessons through a survey of the opinions of current space logisticians. To gather perspectives on logistics lessons, we searched several sources within NASA, including organizations with direct and indirect connections with the system flow in mission planning. We utilized crew debriefs, the John Commonsense lessons repository for the JSC Mission Operations Directorate, and the Skylab Lessons Learned.
Additionally, we searched the public version of the Lessons Learned Information System (LLIS) and verified that we received the same result using the internal version of LLIS for our logistics lesson searches. In conducting the research, information from multiple databases was consolidated into a single spreadsheet of 300 lessons learned. Keywords were applied for the purpose of sorting and evaluation. Once the lessons had been compiled, an analysis of the resulting data was performed, first sorting it by keyword, then finding duplication and root cause, and finally sorting by root cause. The data was then distilled into the top 7 lessons learned across programs, centers, and activities.
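The consolidation step described above — tagging each lesson with keywords, then grouping and ranking by root cause — can be sketched as a simple grouping pass. The record layout below is illustrative, not the study's actual spreadsheet schema.

```python
from collections import defaultdict

def group_by_root_cause(lessons):
    """Group lesson records by root cause and rank causes by frequency.

    Each lesson is a (text, keywords, root_cause) tuple; the field names
    are hypothetical stand-ins for the study's spreadsheet columns.
    Returns (root_cause, [lesson texts]) pairs, most frequent cause first.
    """
    groups = defaultdict(list)
    for text, keywords, cause in lessons:
        groups[cause].append(text)
    return sorted(groups.items(), key=lambda kv: -len(kv[1]))
```

Sorting the grouped output by group size is one way to surface the "top lessons across programs" once duplicates have been folded into shared root causes.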
Geohydrologic aspects for siting and design of low-level radioactive-waste disposal
Bedinger, M.S.
1989-01-01
The objective in siting and designing low-level radioactive-waste repositories is to isolate the waste from the biosphere until the waste no longer poses an unacceptable hazard as a result of radioactive decay. Low-level radioactive waste commonly is isolated at shallow depths with various engineered features to stabilize the waste and to reduce its dissolution and transport by ground water. The unsaturated zone generally is preferred for isolating the waste. Low-level radioactive waste may need to be isolated for 300 to 500 years. Maintenance and monitoring of the repository site are required by Federal regulations for only the first 100 years. Therefore, the geohydrology of the repository site needs to provide natural isolation of the waste for the hazardous period following maintenance of the site. Engineering design of the repository needs to be compatible with the natural geohydrologic conditions at the site. Studies at existing commercial and Federal waste-disposal sites provide information on the problems encountered and the basis for establishing siting guidelines for improved isolation of radioactive waste, engineering design of repository structures, and surveillance needs to assess the effectiveness of the repositories and to provide early warning of problems that may require remedial action. Climate directly affects the hydrology of a site and probably is the most important single factor affecting the suitability of a site for shallow-land burial of low-level radioactive waste. Humid and subhumid regions are not well suited for shallow isolation of low-level radioactive waste in the unsaturated zone; arid regions with zero to small infiltration from precipitation, great depths to the water table, and long flow paths to natural discharge areas are naturally well suited to isolation of the waste.
The guiding rationale is to minimize contact of water with the waste and to minimize transport of waste from the repository. The hydrology of a flow system containing a repository is greatly affected by the engineering of the repository site. Prediction of the performance of the repository is a complex problem, hampered by problems of characterizing the natural and manmade features of the flow system and by the limitations of models to predict flow and geochemical processes in the saturated and unsaturated zones. Disposal in low-permeability unfractured clays in the saturated zone may be feasible where the radionuclide transport is controlled by diffusion rather than advection.
Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback
NASA Astrophysics Data System (ADS)
Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai
2012-01-01
With the increase in the number of remote sensing satellites, the number of image scenes in our repositories is also increasing, and a large portion of these scenes are never retrieved or used. Automatic retrieval of desired image data using query by image content, to fully exploit the huge repository volume, is therefore of great interest. Different users are generally interested in scenes containing different kinds of objects and structures, so it is important to analyze the available image information mining (IIM) methods so that a user can more easily select a method matching his or her requirements. We concentrate our study on high-resolution SAR images, and we propose using InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. For objects with low coherence, such as vegetated areas, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov random fields as texture descriptors for image patches and SVM relevance feedback.
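The relevance-feedback half of such an IIM loop can be sketched with a Rocchio-style query update over patch descriptors. The abstract uses SVM relevance feedback; the nearest-query ranker below is only meant to illustrate the feedback mechanics, and all names and weights are hypothetical.

```python
import numpy as np

def rank_patches(features, relevant_idx, irrelevant_idx):
    """Rank image patches by a Rocchio-style relevance-feedback score.

    features: (n_patches, n_dims) texture descriptors, e.g. GMRF parameters.
    relevant_idx / irrelevant_idx: patch indices the user has labelled so far.
    Returns patch indices sorted from most to least relevant.
    """
    # Move the query toward labelled-relevant patches, away from irrelevant ones.
    query = features[relevant_idx].mean(axis=0)
    if len(irrelevant_idx):
        query -= 0.5 * features[irrelevant_idx].mean(axis=0)
    # Cosine similarity between the updated query and every patch descriptor.
    norms = np.linalg.norm(features, axis=1) * np.linalg.norm(query)
    scores = features @ query / np.maximum(norms, 1e-12)
    return np.argsort(-scores)
```

Each feedback round re-ranks the repository; an SVM-based variant would instead retrain a classifier on the accumulated labels and rank by decision value.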
Citing geospatial feature inventories with XML manifests
NASA Astrophysics Data System (ADS)
Bose, R.; McGarva, G.
2006-12-01
Today published scientific papers include a growing number of citations for online information sources that either complement or replace printed journals and books. We anticipate this same trend for cartographic citations used in the geosciences, following advances in web mapping and geographic feature-based services. Instead of using traditional libraries to resolve citations for print material, the geospatial citation life cycle will include requesting inventories of objects or geographic features from distributed geospatial data repositories. Using a case study from the UK Ordnance Survey MasterMap database, which is illustrative of geographic object-based products in general, we propose citing inventories of geographic objects using XML feature manifests. These manifests: (1) serve as a portable listing of sets of versioned features; (2) could be used as citations within the identification portion of an international geospatial metadata standard; (3) could be incorporated into geospatial data transfer formats such as GML; but (4) can be resolved only with comprehensive, curated repositories of current and historic data. This work has implications for any researcher who foresees the need to make or resolve references to online geospatial databases.
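A portable feature manifest of the kind proposed could be assembled in a few lines; the element and attribute names below are illustrative only, not a published manifest schema.

```python
import xml.etree.ElementTree as ET

def build_manifest(dataset, features):
    """Build a minimal XML manifest citing versioned geographic features.

    `dataset` names the source product (e.g. an Ordnance Survey MasterMap
    layer); `features` is a list of (feature_id, version) pairs. The tag
    names are hypothetical stand-ins for a real manifest vocabulary.
    """
    root = ET.Element("featureManifest", {"dataset": dataset})
    for fid, version in features:
        ET.SubElement(root, "feature", {"id": str(fid), "version": str(version)})
    return ET.tostring(root, encoding="unicode")
```

Such a listing of versioned feature identifiers only resolves, as the abstract notes, against a curated repository that retains current and historic feature versions.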
Management of Object Histories in the SWALLOW Repository,
1980-07-01
time of this future version. Since the end time of the current version should not be automatically extended up to the start time of the token until...and T is determined by the speed with which the available online version storage fills up. Unfortunately, since versions of different objects are...of these images is accessible by following the chain of pointers in the object history. The other images use up storage, but do not have an adverse
Reconsolidated Salt as a Geotechnical Barrier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Francis D.; Gadbury, Casey
Salt as a geologic medium has several attributes favorable to long-term isolation of waste placed in mined openings. Salt formations are largely impermeable, and induced fractures heal as stress returns to equilibrium. Permanent isolation also depends upon the ability to construct geotechnical barriers that achieve nearly the same high-performance characteristics attributed to the native salt formation. Salt repository seal concepts often include elements of reconstituted granular salt. As a specific case in point, the Waste Isolation Pilot Plant recently received regulatory approval to change the disposal panel closure design from an engineered barrier constructed of a salt-based concrete to one that employs simple run-of-mine salt and temporary bulkheads for isolation from ventilation. The Waste Isolation Pilot Plant is a radioactive waste disposal repository for defense-related transuranic elements mined from the Permian evaporite salt beds in southeast New Mexico. Its approved shaft seal design incorporates barrier components comprising salt-based concrete, bentonite, and substantial depths of crushed salt compacted to enhance reconsolidation. This paper will focus on crushed salt behavior when applied as drift closures to isolate disposal rooms during operations. Scientific aspects of salt reconsolidation have been studied extensively. The technical basis for geotechnical barrier performance has been strengthened by recent experimental findings and analogue comparisons. The panel closure change was accompanied by recognition that granular salt will return to a physical state similar to the halite surrounding it. Use of run-of-mine salt ensures physical and chemical compatibility with the repository environment and simplifies ongoing disposal operations. Our current knowledge and expected outcome of research can be assimilated with lessons learned to put forward designs and operational concepts for the next generation of salt repositories. Mined salt repositories have the potential to permanently isolate vast inventories of radioactive and hazardous wastes.
Classifying BCI signals from novice users with extreme learning machine
NASA Astrophysics Data System (ADS)
Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.
2017-07-01
A brain-computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. Several approaches have been proposed to improve such systems, but algorithms are usually tested on standard BCI signals from expert users or from repositories available on the Internet. In this work, an extreme learning machine (ELM) was tested on signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.
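The core of an ELM is a fixed random hidden layer followed by a closed-form least-squares fit of the output weights, which is why training is fast compared with iterative methods. A minimal sketch, with illustrative parameters rather than the study's configuration:

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Train a minimal extreme learning machine for binary classification.

    Hidden-layer weights are random and never updated; only the output
    weights are fit, in closed form, via the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y    # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Threshold the real-valued ELM output at 0.5 for labels in {0, 1}."""
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)
```

For EEG data the rows of `X` would be feature vectors extracted from the recorded signals (e.g. band-power features), not raw samples.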
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA), and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique.
Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, naive Bayes, and case-based reasoning), with ENORA obtaining a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinational, based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinational optimization.
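Step (1) of the methodology keeps only Pareto-optimal classifiers; for two of the stated objectives, maximizing accuracy while minimizing rule count, a dominance filter can be sketched as below. The tuples are hypothetical (accuracy, number of rules) pairs, not results from the paper.

```python
def pareto_front(classifiers):
    """Return the non-dominated classifiers for two objectives:
    maximise accuracy, minimise the number of fuzzy rules.

    `classifiers` is a list of (accuracy, n_rules) tuples. A candidate is
    dominated if some other candidate is at least as good on both
    objectives and strictly better on one.
    """
    front = []
    for acc, rules in classifiers:
        dominated = any(a >= acc and r <= rules and (a > acc or r < rules)
                        for a, r in classifiers)
        if not dominated:
            front.append((acc, rules))
    return front
```

The evolutionary algorithms compared in the paper maintain such a front across generations; the decision-making step then picks one classifier from it according to the decision maker's preferences.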
Connecting the pieces: Using ORCIDs to improve research impact and repositories.
Baessa, Mohamed; Lery, Thibaut; Grenz, Daryl; Vijayakumar, J K
2015-01-01
Quantitative data are crucial in the assessment of research impact in the academic world. However, as a young university created in 2009, King Abdullah University of Science and Technology (KAUST) needs to aggregate bibliometrics from researchers of diverse origins, not necessarily with the proper affiliations. In this context, the University launched an institutional repository in September 2012 with the objective of creating a home for the intellectual outputs of KAUST researchers. Later, the university adopted the first mandated institutional open access policy in the Arab region, effective June 2014. Several projects were then initiated to accurately identify the research being done by KAUST authors and bring it into the repository in accordance with the open access policy. Integration with ORCID has been a key element in this process and the best way to ensure data quality for researchers' scientific contributions. It included the systematic inclusion and creation, where necessary, of ORCID identifiers in the existing repository system, an institutional membership in ORCID, and the creation of dedicated integration tools. In addition, in cooperation with the Office of Research Evaluation, the Library worked on implementing a Current Research Information System (CRIS) as a standardized common resource to monitor KAUST research outputs. We present our findings on the CRIS implementation, the ORCID API, and the repository statistics, as well as our approach to assessing research impact in terms of usage by the global research community.
ERIC Educational Resources Information Center
Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.
2012-01-01
Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…
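A SOM of the kind applied here maps observations onto a low-dimensional grid by repeatedly pulling each input's best-matching unit, and its grid neighbours, toward the input. A minimal sketch with illustrative grid size and decay schedules (not the study's settings):

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr=0.5, seed=0):
    """Train a small self-organizing map.

    Each data point attracts its best-matching unit (BMU) and, through a
    Gaussian neighbourhood on the grid, nearby units; the learning rate
    and neighbourhood radius both decay linearly over epochs.
    """
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        frac = t / epochs
        radius = max(grid) / 2 * (1 - frac) + 0.5
        rate = lr * (1 - frac)
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * radius ** 2))   # neighbourhood weighting
            weights += rate * h[:, None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

After training, inspecting which inputs share a best-matching unit is what gives the SOM its pattern-recognition interpretation: nearby grid units respond to similar observations.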
ERIC Educational Resources Information Center
Yang, Le
2016-01-01
This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…
ERIC Educational Resources Information Center
Zervas, Panagiotis; Fiskilis, Stefanos; Sampson, Demetrios G.
2014-01-01
Over the past years, Remote and Virtual Labs (RVLs) have gained increased attention for their potential to support technology-enhanced science education by enabling science teachers to improve their day-to-day science teaching. Therefore, many educational institutions and scientific organizations have invested efforts for providing online access…
Electronic Repositories of Marked Student Work and Their Contributions to Formative Evaluation
ERIC Educational Resources Information Center
Heinrich, Eva
2004-01-01
The educational literature shows that formative assessment is highly conducive to learning. The tasks given to students in formative assessment generally require open-ended responses that can be given, for example, in essay-type format and that are assessed by a human marker. An essential component is the formative feedback provided by the marker…
NASA Astrophysics Data System (ADS)
Versteeg, R. J.; Wangerud, K.; Mattson, E.; Ankeny, M.; Richardson, A.; Heath, G.
2005-05-01
The Ruby Gulch repository at the Gilt Edge Mine Superfund site is a capped waste rock repository. Early in the system design, EPA and its subcontractor, the Bureau of Reclamation, recognized the need for a long-term monitoring system to provide information on the repository behavior, with the following objectives: (1) provide information on the integrity of the newly constructed surface cover and diversion system; (2) continually assess the waste's hydrological and geochemical behavior, so that rational decisions can be made for the operation of the cover and liner system; (3) give stakeholders easy access to information pertaining to system performance; and (4) integrate a variety of data sources to produce information that could be used to enhance future cover designs. Through discussions among EPA, the Bureau of Reclamation, and Idaho National Laboratory, a long-term monitoring system was designed and implemented allowing EPA to meet these objectives. This system was designed to provide a cost-effective way to deal with massive amounts of data and information, subject to the following specifications: (1) data acquisition should occur autonomously and automatically; (2) data management, processing, and presentation should be automated as much as possible; and (3) users should be able to access all data and information remotely through a web browser. The INL long-term monitoring system integrates the data from a set of 522 resistivity electrodes, consisting of 462 surface electrodes and 60 borehole electrodes (in 4 wells with 15 electrodes each), an outflow meter at the toe of the repository, an autonomous, remotely accessible weather station, and four wells (average depth of 250 feet) with thermocouples, pressure transducers, and sampling ports for water and air. The monitoring system has been in operation for over a year and has collected data continuously over this period.
Results from this system have shown both the diurnal variation in rockmass behavior and the movement of water through the waste (allowing residence time to be estimated), and are leading to a comprehensive model of the repository behavior. Due to the sheer volume of data, a user-driven interface allows users to create their own views of the different datasets.
Best practices for fungal germplasm repositories and perspectives on their implementation.
Wiest, Aric; Schnittker, Robert; Plamann, Mike; McCluskey, Kevin
2012-02-01
Over more than 50 years, the Fungal Genetics Stock Center has grown to become a world-recognized biological resource center. Along with this growth has come the development and implementation of myriad practices for the management and curation of a diverse collection of filamentous fungi, yeasts, and molecular genetic tools for working with the fungi. These practices include techniques for the testing, manipulation, and preservation of individual fungal isolates as well as for processing thousands of isolates in parallel. In addition to providing accurate record keeping, an electronic management system allows the observation of trends in strain distribution and in sample characteristics. Because many ex situ fungal germplasm repositories around the world share similar objectives, best-practice guidelines have been developed by organizations such as the Organisation for Economic Co-operation and Development and the International Society for Biological and Environmental Repositories. These best-practice guidelines provide a framework for the successful operation of collections and promote the development and interaction of biological resource centers around the world.
Object-oriented structures supporting remote sensing databases
NASA Technical Reports Server (NTRS)
Wichmann, Keith; Cromp, Robert F.
1995-01-01
Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.
Knowledge mining from clinical datasets using rough sets and backpropagation neural network.
Nahato, Kindie Biredagn; Harichandran, Khanna Nehemiah; Arputharaj, Kannan
2015-01-01
The availability of clinical datasets and knowledge mining methodologies encourages the researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that will predict the presence or absence of a disease by learning from the minimal set of attributes that has been extracted from the clinical dataset. In this work rough set indiscernibility relation method with backpropagation neural network (RS-BPNN) is used. This work has two stages. The first stage is handling of missing values to obtain a smooth data set and selection of appropriate attributes from the clinical dataset by indiscernibility relation method. The second stage is classification using backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
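The first stage, attribute selection via the rough-set indiscernibility relation, amounts to finding a small attribute subset whose indiscernibility classes are pure with respect to the decision label; that subset (a reduct) is what feeds the neural network. A toy sketch with an illustrative data layout (exhaustive search, which is only practical for a handful of attributes):

```python
from itertools import combinations

def indiscernible(rows, attrs):
    """Partition row indices into indiscernibility classes over `attrs`:
    rows that agree on every selected attribute fall in the same class."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(classes.values())

def is_reduct(rows, labels, attrs):
    """True if every indiscernibility class over `attrs` is label-pure,
    i.e. the attribute subset preserves the classification."""
    return all(len({labels[i] for i in cls}) == 1
               for cls in indiscernible(rows, attrs))

def minimal_reduct(rows, labels):
    """Smallest attribute subset that still discerns the labels."""
    n = len(rows[0])
    for k in range(1, n + 1):
        for attrs in combinations(range(n), k):
            if is_reduct(rows, labels, attrs):
                return attrs
    return tuple(range(n))
```

In practice, reduct computation on real clinical datasets uses heuristics rather than exhaustive search, but the purity criterion is the same.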
Martiník, Ivo
2015-01-01
Rich-media describes a broad range of digital interactive media that is increasingly used in the Internet and also in the support of education. Last year, a special pilot audiovisual lecture room was built as a part of the MERLINGO (MEdia-rich Repository of LearnING Objects) project solution. It contains all the elements of the modern lecture room determined for the implementation of presentation recordings based on the rich-media technologies and their publication online or on-demand featuring the access of all its elements in the automated mode including automatic editing. Property-preserving Petri net process algebras (PPPA) were designed for the specification and verification of the Petri net processes. PPPA does not need to verify the composition of the Petri net processes because all their algebraic operators preserve the specified set of the properties. These original PPPA are significantly generalized for the newly introduced class of the SNT Petri process and agent nets in this paper. The PLACE-SUBST and ASYNC-PROC algebraic operators are defined for this class of Petri nets and their chosen properties are proved. The SNT Petri process and agent nets theory were significantly applied at the design, verification, and implementation of the programming system ensuring the pilot audiovisual lecture room functionality.
10 CFR 60.140 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...
10 CFR 60.140 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...
10 CFR 60.140 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...
10 CFR 60.140 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...
10 CFR 60.140 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...
ERIC Educational Resources Information Center
Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin
2016-01-01
In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…
ERIC Educational Resources Information Center
Limniou, Maria; Downes, John J.; Maskell, Simon
2015-01-01
Nowadays, the use of datasets is of crucial importance for the advancement of educational research. Specifically in the field of Higher Education, many researchers might share through online data repositories their research outputs in order for data to be reusable, accessible and accountable to educational community. The aim of this paper is to…
ERIC Educational Resources Information Center
Appelt, Wolfgang; Mambrey, Peter
The GMD (German National Research Center for Information Technology) has developed the BSCW (Basic Support for Cooperative Work) Shared Workspace system within the last four years with the goal of transforming the Web from a primarily passive information repository to an active cooperation medium. The BSCW system is a Web-based groupware tool for…
If I Had a Hammer (and Several Million Dollars): The Saga of the AIHEC Cultural Learning Centers.
ERIC Educational Resources Information Center
Edinger, Anne; Ambler, Marjane
2002-01-01
Presents an interview with Gail Bruce and Anne Ediger, who, in the early 1990s, conceived the idea of building cultural centers on 30 tribal college campuses. States that they imagined the centers would simply serve as repositories for Indian artifacts; however, after years of fund-raising efforts and program obstacles, the buildings transformed…
Deep Boreholes Seals Subjected to High P,T conditions - Proposed Experimental Studies
NASA Astrophysics Data System (ADS)
Caporuscio, F.
2015-12-01
Deep borehole experimental work will constrain the P,T conditions that "seal" materials will experience in deep borehole crystalline rock repositories. The rocks of interest to this study include mafic (amphibolite) and silicic (granitic gneiss) end members. The experiments will systematically add components to capture discrete changes in both water and EBS component chemistries. Experiments in the system wall rock-clay-concrete-groundwater will evaluate interactions among components, including mineral phase stability, metal corrosion rates, and thermal limits. Based on engineered barrier studies, the experimental investigations have three foci. First, evaluate the interaction between "seal" materials and repository wall rock (crystalline) under fluid-saturated conditions in long-term (i.e., six-month) experiments that reproduce the thermal pulse event of a repository. Second, perform experiments to determine the stability of zeolite minerals (analcime-wairakite solid solution) under repository conditions. Both sets of experiments are critically important for understanding the mineral paragenesis (zeolite and/or clay transformations) associated with "seals" in contact with wall rock at elevated temperatures. Third, examine mineral growth at the metal interface, a principal control on the survivability (i.e., corrosion) of waste canisters in a repository. The objective of this planned experimental work is to evaluate physico-chemical processes for "seal" components and materials relevant to deep borehole disposal. These evaluations will encompass multi-laboratory efforts to develop seal concepts and apply Thermal-Mechanical-Chemical (TMC) modeling to assess barrier material interactions with subsurface fluids and other barrier materials, their stability at high temperatures, and the implications of these processes for the evaluation of thermal limits.
NASA Astrophysics Data System (ADS)
Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid
2011-10-01
In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaic, fuel cell and wind energy) connected to the distribution network. The objective functions to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs, and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are not commensurable, a fuzzy clustering algorithm is utilized to keep the size of the repository within specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. In order to demonstrate the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
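The repository update at the heart of such a scheme can be sketched in a few lines (a minimal illustration of Pareto-archive maintenance, not the authors' implementation; the fuzzy clustering step that bounds the archive size is omitted):

```python
# Minimal sketch of a non-dominated "repository": a candidate enters the
# archive only if no stored solution dominates it, and any solutions it
# dominates are evicted. Names and test vectors are illustrative.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, candidate):
    if any(dominates(stored, candidate) for stored in repo):
        return repo                      # candidate is dominated: reject it
    repo = [s for s in repo if not dominates(candidate, s)]
    repo.append(candidate)               # archive stays mutually non-dominated
    return repo

repo = []
for sol in [(3, 5), (2, 6), (4, 4), (2, 5), (5, 1)]:
    repo = update_repository(repo, sol)
print(sorted(repo))  # -> [(2, 5), (4, 4), (5, 1)]
```

In the full algorithm a fuzzy clustering pass would then prune this archive back to its size limit whenever it overflows.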
NELS 2.0 - A general system for enterprise wide information management
NASA Technical Reports Server (NTRS)
Smith, Stephanie L.
1993-01-01
NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.
NASA Technical Reports Server (NTRS)
Carvalho, Robert F.; Williams, James; Keller, Richard; Sturken, Ian; Panontin, Tina
2004-01-01
InvestigationOrganizer (IO) is a collaborative web-based system designed to support the conduct of mishap investigations. IO provides a common repository for a wide range of mishap related information, and allows investigators to make explicit, shared, and meaningful links between evidence, causal models, findings and recommendations. It integrates the functionality of a database, a common document repository, a semantic knowledge network, a rule-based inference engine, and causal modeling and visualization. Thus far, IO has been used to support four mishap investigations within NASA, ranging from a small property damage case to the loss of the Space Shuttle Columbia. This paper describes how the functionality of IO supports mishap investigations and the lessons learned from the experience of supporting two of the NASA mishap investigations: the Columbia Accident Investigation and the CONTOUR Loss Investigation.
Legaz-García, María del Carmen; Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2012-01-01
Linking Electronic Healthcare Records (EHR) content to educational materials has been considered a key international recommendation to enable clinical engagement and to promote patient safety. This would help citizens access reliable information available on the web and guide them to it properly. In this paper, we describe an approach in that direction, based on the use of dual-model EHR standards and standardized educational contents. The recommendation method will be based on the semantic coverage of the learning content repository for a particular archetype, which will be calculated by applying semantic web technologies such as ontologies and semantic annotations.
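Under one plausible reading of the abstract, the "semantic coverage" score is the fraction of an archetype's concepts matched by a resource's semantic annotations; the formula and the SNOMED codes below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a semantic coverage score: how many of an archetype's
# concepts are covered by a learning resource's annotations. The concept
# identifiers are invented examples.

def semantic_coverage(archetype_concepts, resource_annotations):
    archetype = set(archetype_concepts)
    covered = archetype & set(resource_annotations)
    return len(covered) / len(archetype) if archetype else 0.0

archetype = {"SNOMED:22298006", "SNOMED:194828000", "SNOMED:57054005"}
resource = {"SNOMED:22298006", "SNOMED:57054005", "SNOMED:38341003"}
print(round(semantic_coverage(archetype, resource), 2))  # -> 0.67
```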
Prevention of data duplication for high throughput sequencing repositories
Gabdank, Idan; Chan, Esther T; Davidson, Jean M; Hilton, Jason A; Davis, Carrie A; Baymuradov, Ulugbek K; Narayanan, Aditi; Onate, Kathrina C; Graham, Keenan; Miyasato, Stuart R; Dreszer, Timothy R; Strattan, J Seth; Jolanki, Otto; Tanaka, Forrest Y; Hitz, Benjamin C
2018-01-01
Prevention of unintended duplication is one of the ongoing challenges many databases have to address. When working with high-throughput sequencing data, the complexity of that challenge increases with the complexity of the definition of a duplicate. In a computational data model, a data object represents a real entity like a reagent or a biosample. This representation is similar to how a card represents a book in a paper library catalog. Duplicated data objects not only waste storage, they can mislead users into assuming the model represents more than the single entity. Even if it is clear that two objects represent a single entity, data duplication opens the door to potential inconsistencies between the objects, since the content of the duplicated objects can be updated independently, allowing divergence of the metadata associated with the objects. This is analogous to a paper library catalog that mistakenly contains two cards for a single copy of a book: if the cards simultaneously list two different individuals as the current borrower, it is difficult to determine which of the two actually has the book. Unfortunately, in a large database with multiple submitters, unintended duplication is to be expected. In this article, we present three principal guidelines the Encyclopedia of DNA Elements (ENCODE) Portal follows in order to prevent unintended duplication of both actual files and data objects: definition of identifiable data objects (I), object uniqueness validation (II), and a de-duplication mechanism (III). In addition to explaining our modus operandi, we elaborate on the methods used for identification of sequencing data files. A comparison of the approach taken by the ENCODE Portal with other widely used biological data repositories is provided. Database URL: https://www.encodeproject.org/ PMID:29688363
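Uniqueness validation for files, guideline (II), is commonly realized with a content checksum; this sketch (illustrative, not the ENCODE Portal's code) flags a resubmitted file even when its name differs:

```python
# Identify a sequencing file by a checksum of its bytes, so the same content
# submitted under a new name is caught as a duplicate. The checksum choice
# and registry structure are illustrative.
import hashlib

def content_key(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

seen = {}

def register(name: str, data: bytes):
    key = content_key(data)
    if key in seen:
        raise ValueError(f"{name} duplicates {seen[key]}")
    seen[key] = name

register("run1.fastq", b"@read1\nACGT\n+\nIIII\n")
try:
    register("run1_copy.fastq", b"@read1\nACGT\n+\nIIII\n")  # same bytes, new name
except ValueError as e:
    print(e)  # -> run1_copy.fastq duplicates run1.fastq
```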
NASA Astrophysics Data System (ADS)
Swetnam, T. L.; Walls, R.; Merchant, N.
2017-12-01
CyVerse is a US National Science Foundation funded initiative "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use," supporting and enabling cross-disciplinary collaborations across institutions. CyVerse's free, open-source cyberinfrastructure is being adopted in biogeoscience and space sciences research. CyVerse's data-science-agnostic platforms provide shared data storage, high performance computing, and cloud computing that allow analysis of very large data sets (including incomplete or work-in-progress data sets). Part of CyVerse's success has been in addressing the handling of data through its entire lifecycle, from creation to final publication in a digital data repository to reuse in new analyses. CyVerse developers and user communities have learned many lessons that are germane to Earth and Environmental Science. We present an overview of the tools and services available through CyVerse, including: interactive computing with the Discovery Environment (https://de.cyverse.org/), an interactive data science workbench featuring data storage and transfer via the Data Store; cloud computing with Atmosphere (https://atmo.cyverse.org); and access to HPC via the Agave API (https://agaveapi.co/). Each CyVerse service emphasizes access to long-term data storage, including our own Data Commons (http://datacommons.cyverse.org), as well as external repositories. The Data Commons service manages, organizes, preserves, and publishes data, and allows for its discovery and reuse. All data published to CyVerse's Curated Data receive a permanent identifier (PID) in the form of a DOI (Digital Object Identifier) or ARK (Archival Resource Key). Data that are more fluid can also be published in the Data Commons through Community Collaborated data. The Data Commons provides landing pages, permanent DOIs or ARKs, and supports data reuse and citation through features such as open data licenses and downloadable citations. The ability to access and compute on data within the CyVerse framework, or with external compute resources when necessary, has proven highly beneficial to our user community, which has grown continuously since the inception of CyVerse nine years ago.
Convalescing Cluster Configuration Using a Superlative Framework
Sabitha, R.; Karthik, S.
2015-01-01
Competent data mining methods are vital to discover knowledge from databases, which are built as a result of the enormous growth of data. Various techniques of data mining are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique, which partitions data objects into disjoint segments. The K-means algorithm is a versatile algorithm among the various approaches used in data clustering. The algorithm and its diverse adaptations suffer certain problems in their performance. To overcome these issues, a superlative algorithm is proposed in this paper to perform data clustering. The specific features of the proposed algorithm are discretizing the dataset, thereby improving the accuracy of clustering, and adopting the binary search initialization method to generate cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into their respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository evidently show that the accuracy and validity measures are higher than those of the other two approaches, namely simple K-means and the Binary Search method. Thus, the proposed approach proves that the discretization process improves the efficacy of descriptive data mining tasks. PMID:26543895
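A minimal sketch of the initialization idea, assuming a one-dimensional feature and interpreting "binary search initialization" as bisecting the sorted data into equal slices (our reconstruction, not the authors' code):

```python
# Seed K-means from the sorted data (midpoints of k equal slices), then refine
# with standard Lloyd updates. Toy data; one-dimensional for brevity.

def bisect_init(sorted_vals, k):
    # initial centroid i sits at the midpoint of slice i of the sorted data
    n = len(sorted_vals)
    return [sorted_vals[(2 * i + 1) * n // (2 * k)] for i in range(k)]

def kmeans_1d(values, k, iters=20):
    vals = sorted(values)
    centroids = bisect_init(vals, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[j].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.9, 10.0, 10.5, 9.8, 20.1, 19.9, 20.0]
print(kmeans_1d(data, 3))  # three centroids near 1, 10, and 20
```

Because the seeds already sit inside the three natural groups, Lloyd's iterations converge immediately; random seeding can instead place two seeds in one group.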
Research Data Management Self-Education for Librarians: A Webliography
ERIC Educational Resources Information Center
Goben, Abigail; Raszewski, Rebecca
2015-01-01
As data as a scholarly object continues to grow in importance in the research community, librarians are undertaking increasing responsibilities regarding data management and curation. New library initiatives include assisting researchers in finding data sets for reuse; locating and hosting repositories for required archiving; consultations on…
Deformable segmentation via sparse representation and dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N
2012-10-01
"Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Zervas, Panagiotis; Sergis, Stylianos; Sampson, Demetrios G.; Fyskilis, Stefanos
2015-01-01
Remote and virtual labs (RVLs) are widely used by science education teachers in their daily teaching practice. This has led to a plethora of RVLs that are offered with or without cost. In order to organise them and facilitate their search and findability, several RVL web-based repositories have been operated. As a result, a key open challenge is…
ERIC Educational Resources Information Center
Lim, Ee-Lon; Hew, Khe Foon
2014-01-01
E-books offer a range of benefits to both educators and students, including ease of accessibility and searching capabilities. However, the majority of current e-books are repository-cum-delivery platforms of textual information. Hitherto, there is a lack of empirical research that examines e-books with annotative and sharing capabilities. This…
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper discusses and proposes a method of feature weighting for classification tasks with the competitive-learning artificial neural network LVQ. The feature weighting method searches for attribute weights using PSO so that each attribute's influence on the resulting output is tuned. This method is then applied to the LVQ classifier and tested on 3 datasets obtained from the UCI Machine Learning Repository. An accuracy analysis is then performed for two approaches: the first, using LVQ1, is referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase LVQ-Classifier accuracy.
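A bare-bones sketch of PSO-searched feature weights for a nearest-prototype (LVQ-style) classifier; the dataset, prototypes, and PSO constants are all illustrative, and the real LVQ1 training loop is replaced by fixed prototypes:

```python
# PSO searches for per-feature weights in the distance function of a
# nearest-prototype classifier. Feature 2 is noisy for one point, so accuracy
# improves when its weight shrinks. Everything here is a toy illustration.
import random

random.seed(0)

protos = {0: (0.0, 0.0), 1: (1.0, 1.0)}       # one fixed prototype per class
data = [((0.1, 1.5), 0), ((0.9, 1.2), 1),     # first point: feature 2 misleads
        ((0.2, 0.1), 0), ((0.8, 0.9), 1)]

def accuracy(w):
    def dist(x, p):
        return sum(wi * (xi - pi) ** 2 for wi, xi, pi in zip(w, x, p))
    hits = sum(min(protos, key=lambda c: dist(x, protos[c])) == y
               for x, y in data)
    return hits / len(data)

# bare-bones PSO over weight vectors in [0, 1]^2
particles = [[random.random(), random.random()] for _ in range(10)]
vel = [[0.0, 0.0] for _ in particles]
best = [p[:] for p in particles]
gbest = max(best, key=accuracy)
for _ in range(30):
    for i, p in enumerate(particles):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (best[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] = min(1.0, max(0.0, p[d] + vel[i][d]))
        if accuracy(p) > accuracy(best[i]):
            best[i] = p[:]
    gbest = max(best, key=accuracy)
print(gbest, accuracy(gbest))
```

Equal weights score 0.75 on this toy data (the noisy point is misclassified); any weight vector with a small enough second weight scores 1.0, which is what the swarm searches for.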
AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning
NASA Astrophysics Data System (ADS)
Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob
2015-01-01
We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faybishenko, Boris; Birkholzer, Jens; Persoff, Peter
2016-09-01
The goal of the Fifth Worldwide Review is to document evolution in the state of the art of approaches for nuclear waste disposal in geological formations since the Fourth Worldwide Review, which was released in 2006. The ten years since the previous Worldwide Review have seen major developments in a number of nations throughout the world pursuing geological disposal programs, both in preparing and reviewing safety cases for the operational and long-term safety of proposed and operating repositories. The countries that are approaching implementation of geological disposal will increasingly focus on the feasibility of safely constructing and operating their repositories in the short and long terms on the basis of existing regulations. The WWR-5 will also address a number of specific technical issues in safety case development along with the interplay among stakeholder concerns, technical feasibility, engineering design issues, and operational and post-closure safety. Preparation and publication of the Fifth Worldwide Review on nuclear waste disposal facilitates assessing the lessons learned and developing future cooperation between the countries. The Report provides scientific and technical experiences on preparing for and developing scientific and technical bases for nuclear waste disposal in deep geologic repositories in terms of requirements, societal expectations and the adequacy of cases for long-term repository safety. The Chapters include potential issues that may arise as repository programs mature, and identify techniques that demonstrate the safety cases and aid in promoting and gaining societal confidence. The report will also be used to exchange experience with other fields of industry and technology, in which concepts similar to the design and safety cases are applied, as well as to facilitate public perception and understanding of the safety of the disposal approaches relative to risks that may increase over long time frames in the absence of a successful implementation of final dispositioning.
Evaluation on radiation protection aspect and radiological risk at Mukim Belanja repository
NASA Astrophysics Data System (ADS)
Azmi, Siti Nur Aisyah; Kenoh, Hamiza; Majid, Amran Ab.
2016-01-01
Asian Rare Earth (ARE) is a locally incorporated company that operated a mineral processing operation to extract rare earth elements. ARE has received much attention from the public from the beginning of its operation through the decommissioning and decontamination of the plant. Due to the existence of Naturally Occurring Radioactive Material (NORM) in the residue, the decommissioning and disposal were done by the company in collaboration with the Perak State Government and the Atomic Energy Licensing Board (AELB). The main objective of this study is to review the level of compliance with the existing Radiation Protection Regulations enforced by AELB, particularly in meeting the allowed exposure dose limit. The next objective was to study the impact of the construction of the Mukim Belanja Repository on workers and the public. This study was conducted by analyzing documents that were issued and by conducting area monitoring using Geiger Muller (GM) and Sodium Iodide (NaI(Tl)) survey meters. The measurements were made at 5 cm and 1 m from the ground surface at 27 measurement stations. The external doses measured were within the background levels of the surrounding area. The annual effective dose using the highest reading at 5 cm and 1 m from the ground surface by the GM detector was calculated to be 1.36 mSv/year and 1.21 mSv/year, respectively, whereas the annual effective dose using the highest reading at 5 cm and 1 m from the ground surface by the NaI(Tl) detector was calculated to be 3.31 mSv/year and 2.83 mSv/year, respectively. The calculated cancer risks from the study showed that the risk is small compared with the risks derived from natural radiation, based on the global annual radiation dose to humans. This study therefore indicated that the repository is able to constrain the dose exposure from the disposed NORM waste. The study also revealed that the construction of the repository has complied with all the rules and regulations to which it is subject. The doses received by the radiation workers and the public during the construction of the repository were below the annual limits, i.e., 20 mSv/year and 1 mSv/year respectively.
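The annual effective dose figures follow from a standard dose-rate conversion; the sketch below assumes continuous occupancy (8760 h/year), and the input dose rate is back-calculated for illustration, not a value reported by the study:

```python
# Back-of-envelope conversion used in surveys like this one: measured dose
# rate (uSv/h) times annual exposure hours gives the annual effective dose.
# The 0.155 uSv/h input is illustrative, chosen so the result lands near the
# reported 1.36 mSv/year; the study's raw readings are not in the abstract.

HOURS_PER_YEAR = 24 * 365  # 8760 h, i.e. continuous occupancy

def annual_dose_mSv(rate_uSv_per_h, hours=HOURS_PER_YEAR, occupancy=1.0):
    return rate_uSv_per_h * hours * occupancy / 1000.0  # uSv -> mSv

print(round(annual_dose_mSv(0.155), 2))  # -> 1.36
```

In practice an occupancy factor below 1.0 is often applied for members of the public who spend only part of the year near the site.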
Content and Knowledge Management in a Digital Library and Museum.
ERIC Educational Resources Information Center
Yeh, Jian-Hua; Chang, Jia-Yang; Oyang, Yen-Jen
2000-01-01
Discusses the design of the National Taiwan University Digital Library and Museum that addresses both content and knowledge management. Describes a two-tier repository architecture that facilitates content management, includes an object-oriented model to facilitate the management of temporal information, and eliminates the need to manually…
AstroCV: Astronomy computer vision library
NASA Astrophysics Data System (ADS)
González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.
2018-04-01
AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.
At the Creation: Chaos, Control, and Automation--Commercial Software Development for Archives.
ERIC Educational Resources Information Center
Dürr, W. Theodore
1988-01-01
An approach to the design of flexible text-based management systems for archives includes tiers for repository, software, and user management systems. Each tier has four layers--objective, program, result, and interface. Traps awaiting software development companies involve the market, competition, operations, and finance. (10 references) (MES)
Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2016-06-03
Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on mappings between the entities of the data schema and the ontological infrastructure that provides the meaning of the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, thus allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
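The mapping-driven transformation can be illustrated in miniature (the schema, IRIs, and mapping table are invented for the example; SWIT's actual mapping language is richer):

```python
# Toy mapping-driven transformation: rows from a tabular source become RDF
# triples (N-Triples strings) according to declarative column-to-property
# mappings. All IRIs and column names are invented for illustration.

BASE = "http://example.org/"
MAPPING = {"name": BASE + "onto/hasName", "gene": BASE + "onto/relatedGene"}

def row_to_triples(row_id, row):
    subj = f"<{BASE}patient/{row_id}>"
    return [f'{subj} <{prop}> "{row[col]}" .'
            for col, prop in MAPPING.items() if col in row]

triples = row_to_triples(1, {"name": "sample-01", "gene": "BRCA1"})
for t in triples:
    print(t)
```

Because the mapping is data, not code, the same table can be re-run against a revised mapping without touching the transformation logic, which is the reuse property the abstract emphasizes.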
Uranium (VI) solubility in carbonate-free ERDA-6 brine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucchini, Jean-francois; Khaing, Hnin; Reed, Donald T
2010-01-01
When present, uranium is usually an element of importance in a nuclear waste repository. In the Waste Isolation Pilot Plant (WIPP), uranium is the most prevalent actinide component by mass, with about 647 metric tons to be placed in the repository. Therefore, the chemistry of uranium, and especially its solubility under WIPP conditions, needs to be well determined. Long-term experiments were performed to measure the solubility of uranium (VI) in carbonate-free ERDA-6 brine, a simulated WIPP brine, at pC_H+ values between 8 and 12.5. These data, obtained from the over-saturation approach, were the first repository-relevant data for the VI actinide oxidation state. The solubility trends observed pointed towards low uranium solubility in WIPP brines and a lack of amphotericity. At the expected pC_H+ in the WIPP (~9.5), measured uranium solubility approached 10^-7 M. The objective of these experiments was to establish a baseline solubility to further investigate the effects of carbonate complexation on uranium solubility in WIPP brines.
Food entries in a large allergy data repository
Plasek, Joseph M.; Goss, Foster R.; Lai, Kenneth H.; Lau, Jason J.; Seger, Diane L.; Blumenthal, Kimberly G.; Wickner, Paige G.; Slight, Sarah P.; Chang, Frank Y.; Topaz, Maxim; Bates, David W.
2016-01-01
Objective Accurate food adverse sensitivity documentation in electronic health records (EHRs) is crucial to patient safety. This study examined, encoded, and grouped foods that caused any adverse sensitivity in a large allergy repository using natural language processing and standard terminologies. Methods Using the Medical Text Extraction, Reasoning, and Mapping System (MTERMS), we processed both structured and free-text entries stored in an enterprise-wide allergy repository (Partners’ Enterprise-wide Allergy Repository), normalized diverse food allergen terms into concepts, and encoded these concepts using the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED-CT) and Unique Ingredient Identifiers (UNII) terminologies. Concept coverage also was assessed for these two terminologies. We further categorized allergen concepts into groups and calculated the frequencies of these concepts by group. Finally, we conducted an external validation of MTERMS’s performance when identifying food allergen terms, using a randomized sample from a different institution. Results We identified 158 552 food allergen records (2140 unique terms) in the Partners repository, corresponding to 672 food allergen concepts. High-frequency groups included shellfish (19.3%), fruits or vegetables (18.4%), dairy (9.0%), peanuts (8.5%), tree nuts (8.5%), eggs (6.0%), grains (5.1%), and additives (4.7%). Ambiguous, generic concepts such as “nuts” and “seafood” accounted for 8.8% of the records. SNOMED-CT covered more concepts than UNII in terms of exact (81.7% vs 68.0%) and partial (14.3% vs 9.7%) matches. Discussion Adverse sensitivities to food are diverse, and existing standard terminologies have gaps in their coverage of the breadth of allergy concepts. Conclusion New strategies are needed to represent and standardize food adverse sensitivity concepts, to improve documentation in EHRs. PMID:26384406
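The exact/partial coverage percentages reported above reduce to simple share computations; the terms and match sets below are toy examples, not the study's data:

```python
# Each allergen concept is checked for an exact or partial match in a
# terminology; coverage is the share of concepts in each category. Terms and
# match sets are invented for illustration.

def coverage(concepts, exact, partial):
    n = len(concepts)
    e = sum(c in exact for c in concepts)
    p = sum(c not in exact and c in partial for c in concepts)
    return round(100 * e / n, 1), round(100 * p / n, 1)

concepts = ["peanut", "shrimp", "tree nut", "red dye"]
snomed_exact, snomed_partial = {"peanut", "shrimp", "tree nut"}, {"red dye"}
print(coverage(concepts, snomed_exact, snomed_partial))  # -> (75.0, 25.0)
```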
OWLing Clinical Data Repositories With the Ontology Web Language.
Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther
2014-08-01
The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools that answer the professional needs of researchers, and it constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories capable of coping with continuous change in the biomedical domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All the information required to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework to build data repositories with solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical staff, thus facilitating the engineering of clinical software systems.
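The on-the-fly interface generation can be illustrated with a toy model-driven form renderer (not OntoCRF code; the field model below stands in for what OntoCRF reads from its ontologies):

```python
# Model-driven UI sketch: the form definition lives in a declarative model and
# the HTML is generated from it, so changing the model changes the interface
# without any programming. Field names and types are invented examples.

MODEL = {
    "fields": [
        {"name": "age", "type": "integer", "label": "Age (years)"},
        {"name": "diagnosis", "type": "string", "label": "Diagnosis"},
    ]
}

def render_form(model):
    widget = {"integer": "number", "string": "text"}
    rows = [f'<label>{f["label"]}<input type="{widget[f["type"]]}" '
            f'name="{f["name"]}"></label>' for f in model["fields"]]
    return "\n".join(rows)

html = render_form(MODEL)
print(html)
```

Adding a field to MODEL immediately adds an input to the rendered page, which mirrors the "instant online availability of any modification" claim in the abstract.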
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slovic, P.; Layman, M.; Flynn, J.H.
1990-11-01
In July 1989 the authors produced a report titled Perceived Risk, Stigma, and Potential Economic Impacts of a High-Level Nuclear-Waste Repository in Nevada (Slovic et al., 1989). That report described a program of research designed to assess the potential impacts of a high-level nuclear waste repository at Yucca Mountain, Nevada, upon tourism, retirement and job-related migration, and business development in Las Vegas and the state. It was concluded that adverse economic impacts potentially may result from two related social processes. Specifically, the study by Slovic et al. employed analyses of imagery in order to overcome concerns about the validity of direct questions regarding the influence of a nuclear-waste repository at Yucca Mountain upon a person's future behaviors. During the latter months of 1989, data were collected in three major telephone surveys, designed to achieve the following objectives: (1) to replicate the results from the Phoenix, Arizona, surveys using samples from other populations that contribute to tourism, migration, and development in Nevada; (2) to retest the original Phoenix respondents to determine the stability of their images across an 18-month time period and to determine whether their vacation choices subsequent to the first survey were predictable from the images they produced in that original survey; (3) to elicit additional word-association images for the stimulus underground nuclear waste repository in order to determine whether the extreme negative images generated by the Phoenix respondents would occur with other samples of respondents; and (4) to develop and test a new method for imagery elicitation, based upon a rating technique rather than on word associations. 2 refs., 8 figs., 13 tabs.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Ketcham, R. A.
2017-12-01
Nanometer- to centimeter-scale imaging techniques such as (focused ion beam) scanning electron microscopy, magnetic resonance imaging, and X-ray (micro)tomography have, since the 1990s, produced 2D and 3D datasets of rock microstructure that allow investigation of nonlinear flow and mechanical phenomena at length scales otherwise inaccessible to laboratory measurement. The numerical approaches that use such images produce various upscaled parameters required by subsurface flow and deformation simulators. All of this has revolutionized our knowledge of grain-scale phenomena. However, a lack of data-sharing infrastructure among research groups makes it difficult to integrate different length scales. We have developed a sustainable, open, and easy-to-use repository called the Digital Rocks Portal (https://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of engineering and geosciences researchers not necessarily trained in computer science or data analysis. The Digital Rocks Portal (NSF EarthCube Grant 1541008) is the first repository for imaged porous-microstructure data. It is implemented within the reliable, 24/7-maintained high-performance computing infrastructure supported by the Texas Advanced Computing Center (University of Texas at Austin). Long-term storage is provided through the University of Texas System Research Cyberinfrastructure initiative. We show how the data can be documented, referenced in publications via digital object identifiers (see Figure below for examples), visualized, searched for, and linked to other repositories. We show the recently implemented integration of remote parallel visualization and bulk upload for large datasets, as well as a preliminary flow-simulation workflow with the pore structures currently stored in the repository. We discuss the issues of collecting correct metadata, data discoverability, and repository sustainability.
Semantic Indexing of Medical Learning Objects: Medical Students' Usage of a Semantic Network.
Tix, Nadine; Gießler, Paul; Ohnesorge-Radtke, Ursula; Spreckelsen, Cord
2015-11-11
The Semantically Annotated Media (SAM) project aims to provide a flexible platform for searching, browsing, and indexing medical learning objects (MLOs) based on a semantic network derived from established classification systems. Primarily, SAM supports the Aachen emedia skills lab, but SAM is ready for indexing distributed content and the Simple Knowledge Organizing System standard provides a means for easily upgrading or even exchanging SAM's semantic network. There is a lack of research addressing the usability of MLO indexes or search portals like SAM and the user behavior with such platforms. The purpose of this study was to assess the usability of SAM by investigating characteristic user behavior of medical students accessing MLOs via SAM. In this study, we chose a mixed-methods approach. Lean usability testing was combined with usability inspection by having the participants complete four typical usage scenarios before filling out a questionnaire. The questionnaire was based on the IsoMetrics usability inventory. Direct user interaction with SAM (mouse clicks and pages accessed) was logged. The study analyzed the typical usage patterns and habits of students using a semantic network for accessing MLOs. Four scenarios capturing characteristics of typical tasks to be solved by using SAM yielded high ratings of usability items and showed good results concerning the consistency of indexing by different users. Long-tail phenomena emerge as they are typical for a collaborative Web 2.0 platform. Suitable but nonetheless rarely used keywords were assigned to MLOs by some users. It is possible to develop a Web-based tool with high usability and acceptance for indexing and retrieval of MLOs. SAM can be applied to indexing multicentered repositories of MLOs collaboratively.
Piloting a Deceased Subject Integrated Data Repository and Protecting Privacy of Relatives
Huser, Vojtech; Kayaalp, Mehmet; Dodd, Zeyno A.; Cimino, James J.
2014-01-01
Use of deceased subjects' Electronic Health Records can be an important piloting platform for informatics or biomedical research. The existing legal framework allows such research under less strict de-identification criteria; however, the privacy of non-decedents must be protected. We report on the creation of the deceased subject Integrated Data Repository (dsIDR) at the National Institutes of Health Clinical Center and a pilot methodology to remove secondary protected health information or identifiable information (secondary PxI; information about persons other than the primary patient). We characterize the available structured coded data in dsIDR and report the estimated frequencies of secondary PxI, ranging from 12.9% (sensitive token presence) to 1.1% (under stricter criteria). Federating decedent EHR data from multiple institutions can address sample-size limitations, and our pilot study provides lessons learned and a methodology that can be adopted by other institutions. PMID:25954378
PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.
Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan
2018-05-01
Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems and workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly, and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error-checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages the sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.
Harris, Erin D; Ziniel, Sonja I; Amatruda, Jonathan G; Clinton, Catherine M; Savage, Sarah K; Taylor, Patrick L; Huntington, Noelle L; Green, Robert C; Holm, Ingrid A
2012-03-01
Little is known about parental attitudes toward return of individual research results (IRRs) in pediatric genomic research. The aim of this study was to understand the views of the parents who enrolled their children in a genomic repository in which IRRs will be returned. We conducted focus groups with parents of children with developmental disorders enrolled in the Gene Partnership (GP), a genomic research repository that offers to return IRRs, to learn about their understanding of the GP, motivations for enrolling their children, and expectations regarding the return of IRRs. Parents hoped to receive IRRs that would help them better understand their children's condition(s). They understood that this outcome was unlikely, but hoped that their children's participation in the GP would contribute to scientific knowledge. Most parents wanted to receive all IRRs about their child, even for diseases that were severe and untreatable, citing reasons of personal utility. Parents preferred electronic delivery of the results and wanted to designate their preferences regarding what information they would receive. It is important for researchers to understand participant expectations in enrolling in a research repository that offers to disclose children's IRRs in order to effectively communicate the implications to parents during the consenting process.
Designing for Change: Interoperability in a scaling and adapting environment
NASA Astrophysics Data System (ADS)
Yarmey, L.
2015-12-01
The Earth Science cyberinfrastructure landscape is constantly changing. Technologies advance and technical implementations are refined or replaced. Data types, volumes, packaging, and use cases evolve. Scientific requirements emerge and mature. Standards shift while systems scale and adapt. In this complex and dynamic environment, interoperability remains a critical component of successful cyberinfrastructure. Through the resource- and priority-driven iterations on systems, interfaces, and content, questions fundamental to stable and useful Earth Science cyberinfrastructure arise. For instance, how are sociotechnical changes planned, tracked, and communicated? How should operational stability balance against 'new and shiny'? How can ongoing maintenance and mitigation of technical debt be managed in an often short-term resource environment? The Arctic Data Explorer is a metadata brokering application developed to enable discovery of international, interdisciplinary Arctic data across distributed repositories. Completely dependent on interoperable third party systems, the Arctic Data Explorer publicly launched in 2013 with an original 3000+ data records from four Arctic repositories. Since then the search has scaled to 25,000+ data records from thirteen repositories at the time of writing. In the final months of original project funding, priorities shift to lean operations with a strategic eye on the future. Here we present lessons learned from four years of Arctic Data Explorer design, development, communication, and maintenance work along with remaining questions and potential directions.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-11
... 9:00 a.m. RFP Team-learning 9:30 a.m. Better Buying Power 2.0 10:30 a.m. Acker Knowledge Repository...: Pursuant to 5 U.S.C. 552b and 41 CFR 102-3.140 through 102-3.165, and the availability of space, this meeting is open to the public. However, because of space limitations, allocation of seating will be made...
NASA Astrophysics Data System (ADS)
Gaber, Mohamed Medhat; Zaslavsky, Arkady; Krishnaswamy, Shonali
Data mining is concerned with the process of computationally extracting hidden knowledge structures represented in models and patterns from large data repositories. It is an interdisciplinary field of study that has its roots in databases, statistics, machine learning, and data visualization. Data mining has emerged as a direct outcome of the data explosion that resulted from the success in database and data warehousing technologies over the past two decades (Fayyad, 1997; Fayyad, 1998; Kantardzic, 2003).
A Study on Course Management System Implementation in Indonesian Higher Education Institutions
NASA Astrophysics Data System (ADS)
Saputra, Y. A.; Singgih, M. L.; Latiffianti, E.; Suryani, E.; Mudjahidin
2018-04-01
The development of information technology has brought new color to the learning process in higher education. In Indonesia, the current trend shows increasing use of course management systems (CMS) to support the existing conventional learning method in classrooms. This paper attempts to understand the characteristics of CMS implementation based on a survey of several higher education institutions in Indonesia. Nine selected higher education institutions were observed in this study. The objectives were to examine CMS implementation in terms of: 1) the management of the implementation; 2) its evaluation; 3) the originality of materials, platform, and features; and 4) the level of participation. The results showed that the use of CMS in these institutions was, in general, intended to support conventional classroom learning by providing a repository of lecture notes and a communication forum outside the classroom. The management task was mostly handled by a dedicated unit. Moodle (free software) was the typical platform in use, and none of the institutions chose a paid platform such as Blackboard. Access to the CMS was kept closed to a limited group of people because of the high cost of assuring the originality of materials. We also observed few attempts to evaluate the success of CMS implementation in each institution; where success was measured, the measurement was limited to users' satisfaction levels. The majority of institutions claimed a good internal participation level (with lecturers and students as the main users), but in general we found that lecturer participation in most institutions was low or even very low.
[Virtual Campus of Public Health: six years of human resources education in Mexico].
Ramos Herrera, Igor; Alfaro Alfaro, Noé; Fonseca León, Joel; García Sandoval, Cristóbal; González Castañeda, Miguel; López Zermeño, María Del Carmen; Benítez Morales, Ricardo
2014-11-01
This paper discusses the gestation process, implementation methodology, and results obtained from the initiative to use e-learning to train human resources for health, six years after the launch of the Virtual Campus of Public Health of the University of Guadalajara (Mexico); the discussion is framed by Pan American Health Organization (PAHO) standards and practices. This is a special report on the work done by the institutional committee of the Virtual Campus in western Mexico to create an Internet portal that follows the guidelines of the strategic model established by Nodo México and PAHO for the Region of the Americas. This Virtual Campus began its activities in 2007, on the basis of the use of free software and institutional collaboration. Since the initial year of implementation of the node, over 500 health professionals have been trained using virtual courses, the node's educational platform, and a repository of virtual learning resources that are interoperable with other repositories in Mexico and the Region of the Americas. The University of Guadalajara Virtual Campus committee has followed the proposed model as much as possible, thereby achieving most of the goals set in the initial work plan, despite a number of administrative challenges and the difficulty of motivating committee members.
Pattern recognition for cache management in distributed medical imaging environments.
Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos
2016-02-01
Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is a main problem in this kind of approach, since tremendous volumes of data are involved. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability to accurately select the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model under distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just one week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training. Preliminary results demonstrate an effective reduction in communication latency when the proposed solution feeds a prefetching mechanism. The proposed approach is very attractive for cache replacement and prefetching policies, given the good results obtained from the first moments of deployment.
NASA Astrophysics Data System (ADS)
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms offer greater transparency and are easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for the descriptive tasks of data mining. The choice of an algorithm also depends on its ability to solve predictive tasks. This article compares the quality of solutions to binary and multiclass classification problems in experiments on six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5, although the latter two generate more compact rule sets.
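None of the three algorithms ships with scikit-learn, but the experimental setup, cross-validated classification quality on a UCI benchmark, can be sketched with an entropy-based decision tree (the closest scikit-learn analogue of C4.5) measured against a majority-class baseline:

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A UCI-style multiclass benchmark (Iris originates from the UCI repository).
X, y = load_iris(return_X_y=True)

# CART with entropy splitting stands in for C4.5 here; Ripper and In-Close
# have no scikit-learn implementation.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
baseline = DummyClassifier(strategy="most_frequent")

for name, model in [("tree", tree), ("baseline", baseline)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

The same loop extends naturally to several datasets and algorithms, which is essentially the shape of the comparison the article reports.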
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
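A hinge-loss learner updated with `partial_fit` is an online linear SVM, which is the base learner this paper builds on. The toy sketch below shows only that half of the setup on a synthetic stream; the paper's Markov-chain sample selection is not reproduced (batches here are drawn i.i.d.):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Two well-separated Gaussian classes as a toy stand-in for a benchmark set.
def draw(n):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0)
    return X, y

# Online linear SVM: hinge loss, updated one mini-batch at a time.
svm = SGDClassifier(loss="hinge", random_state=1)
for _ in range(50):
    X, y = draw(20)
    svm.partial_fit(X, y, classes=[0, 1])

X_test, y_test = draw(500)
print(f"test accuracy: {svm.score(X_test, y_test):.2f}")
```

In the paper's scheme, the next training sample would be chosen by a Markov chain over the data rather than drawn at random, which is where the reported performance gain comes from.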
Survey of Staff Perceptions of the AEL Resource Center.
ERIC Educational Resources Information Center
Cowley, Kimberly S.
The Resource Center at the Appalachia Educational Laboratory (AEL), Inc., provides direct services to clients both within and outside AEL, as well as serving as a repository and distribution center for educational materials. Three main objectives were identified: to discover the extent to which staff use current components of the Resource Center;…
Unified Database Development Program. Final Report.
ERIC Educational Resources Information Center
Thomas, Everett L., Jr.; Deem, Robert N.
The objective of the unified database (UDB) program was to develop an automated information system that would be useful in the design, development, testing, and support of new Air Force aircraft weapon systems. Primary emphasis was on the development of: (1) a historical logistics data repository system to provide convenient and timely access to…
Discourses of the Contemporary Urban Campus in Europe: Intimations of Americanisation?
ERIC Educational Resources Information Center
McEldowney, Malachy; Gaffikin, Frank; Perry, David C.
2009-01-01
This article studies major structural changes in both the urban context and the internal objectives of universities in Europe. While they enjoy expanded student demand and an elevated role in their city-region economy as significant creators and repositories of knowledge, they simultaneously confront a funding gap in accommodating these higher…
Metadata mapping and reuse in caBIG.
Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis
2009-02-05
This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes, and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-gram) and Dynamic algorithms are compared, and both show similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework, and potentially within any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to addressing the need for systems that can handle the enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
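The Dice di-gram measure mentioned here is a standard lexical similarity: twice the number of shared character bigrams divided by the total bigram count. A small sketch with invented attribute and CDE names (the paper's own name lists are not given in the abstract):

```python
def bigrams(s):
    """Set of character bigrams of a lowercased string."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over character bigrams, in [0, 1]."""
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))

# Hypothetical names: match UML class attributes to CDE candidates by
# picking the highest-scoring CDE for each attribute.
uml_attrs = ["patientBirthDate", "tumorGrade"]
cdes = ["Patient Birth Date", "Tumor Histologic Grade", "Specimen Type"]

for attr in uml_attrs:
    best = max(cdes, key=lambda c: dice(attr, c.replace(" ", "")))
    print(attr, "->", best)
```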
Solbrig, Harold R; Chute, Christopher G
2012-01-01
Objective: The objective of this study is to develop an approach to evaluate the quality of terminological annotations on the value set (ie, enumerated value domain) components of the common data elements (CDEs) in the context of clinical research using both unified medical language system (UMLS) semantic types and groups. Materials and Methods: The CDEs of the National Cancer Institute (NCI) Cancer Data Standards Repository, the NCI Thesaurus (NCIt) concepts and the UMLS semantic network were integrated using a semantic web-based framework for a SPARQL-enabled evaluation. First, the set of CDE-permissible values with corresponding meanings in external controlled terminologies were isolated. The corresponding value meanings were then evaluated against their NCI- or UMLS-generated semantic network mapping to determine whether all of the meanings fell within the same semantic group. Results: Of the enumerated CDEs in the Cancer Data Standards Repository, 3093 (26.2%) had elements drawn from more than one UMLS semantic group. A random sample (n=100) of this set of elements indicated that 17% of them were likely to have been misclassified. Discussion: The use of existing semantic web tools can support a high-throughput mechanism for evaluating the quality of large CDE collections. This study demonstrates that the involvement of multiple semantic groups in an enumerated value domain of a CDE is an effective anchor to trigger an auditing point for quality evaluation activities. Conclusion: This approach produces a useful quality assurance mechanism for a clinical study CDE repository. PMID:22511016
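The audit rule described here, that a value domain spanning more than one UMLS semantic group flags a CDE for review, can be illustrated without the SPARQL machinery. A toy Python version follows; the mapping fragment is illustrative only (the real UMLS Semantic Network defines roughly 127 types grouped into 15 semantic groups):

```python
# Hypothetical fragment of the UMLS semantic-type -> semantic-group mapping.
SEMANTIC_GROUP = {
    "Disease or Syndrome": "Disorders",
    "Neoplastic Process": "Disorders",
    "Pharmacologic Substance": "Chemicals & Drugs",
    "Laboratory Procedure": "Procedures",
}

def groups_spanned(value_meaning_types):
    """Semantic groups covered by a CDE's permissible-value meanings.
    More than one group is the trigger for a manual quality review."""
    return {SEMANTIC_GROUP[t] for t in value_meaning_types}

cde_types = ["Disease or Syndrome", "Neoplastic Process", "Pharmacologic Substance"]
spanned = groups_spanned(cde_types)
print(sorted(spanned), "-> needs review" if len(spanned) > 1 else "-> ok")
```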
NASA Astrophysics Data System (ADS)
Servilla, M. S.; Brunt, J.; Costa, D.; Gries, C.; Grossman-Clarke, S.; Hanson, P. C.; O'Brien, M.; Smith, C.; Vanderbilt, K.; Waide, R.
2017-12-01
In the world of data repositories, there seems to be a never-ending struggle between the generation of high-quality data documentation and the ease of archiving a data product: the higher the documentation standards, the greater the effort required of the scientist, and the less likely the data will be archived. The Environmental Data Initiative (EDI) attempts to balance the rigor of data documentation against the effort required of a scientist to upload and archive data. An outgrowth of the LTER Network Information System, EDI is funded by the US NSF Division of Environmental Biology to support the LTER, LTREB, OBFS, and MSB programs, in addition to providing an open data archive for environmental scientists without a viable archive. EDI uses the PASTA repository software, developed originally by the LTER. PASTA is metadata driven and documents data with the Ecological Metadata Language (EML), a high-fidelity standard that can describe all types of data in great detail. PASTA incorporates a series of data quality tests to ensure that data are correctly documented with EML, in a process termed "metadata and data congruence"; incongruent data packages are not admitted to the repository. EDI reduces the burden of data documentation on scientists in two ways. First, EDI provides hands-on assistance with data documentation best practices, using tools written in R (with Python versions under development) for generating EML. These tools hide the details of EML generation and syntax behind a more natural and contextual setting for describing data. Second, EDI works closely with community information managers in defining the rules used in PASTA quality tests. Rules deemed too strict can be turned off completely or set to issue only a warning while the community learns how best to handle the situation and improves its documentation practices. Rules can also be added or refined over time to improve the overall quality of archived data. The outcomes of the quality tests are stored as part of the data archive in PASTA and are accessible to all users of the EDI data repository. In summary, EDI's metadata support to scientists and the comprehensive set of quality tests for metadata and data congruency provide an ideal archive for environmental and ecological data.
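The "metadata and data congruence" tests themselves belong to PASTA; as an illustration only, one such check, verifying that the attributes declared in metadata match a data table's header, might look like this (the attribute names and CSV content are invented):

```python
import csv
import io

# Hypothetical congruence check in the spirit of PASTA's quality tests:
# do the attributes declared in the (EML) metadata match the data header?
def congruent(declared, csv_text):
    header = next(csv.reader(io.StringIO(csv_text)))
    missing = [a for a in declared if a not in header]
    extra = [h for h in header if h not in declared]
    return (not missing and not extra), missing, extra

ok, missing, extra = congruent(
    ["site", "date", "air_temp_c"],
    "site,date,air_temp_c\nA1,2017-06-01,14.2\n",
)
print("congruent" if ok else f"incongruent: missing={missing} extra={extra}")
```

A real implementation would also check declared types, units, and row counts against the data, which is the kind of congruence PASTA enforces before admitting a package.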
Building a diabetes screening population data repository using electronic medical records.
Tuan, Wen-Jan; Sheehy, Ann M; Smith, Maureen A
2011-05-01
There has been a rapid advancement of information technology in the area of clinical and population health data management since 2000. However, with the fast growth of electronic medical records (EMRs) and the increasing complexity of information systems, it has become challenging for researchers to effectively access, locate, extract, and analyze information critical to their research. This article introduces an outpatient encounter data framework designed to construct an EMR-based population data repository for diabetes screening research. The outpatient encounter data framework is developed on a hybrid data structure of entity-attribute-value models, dimensional models, and relational models. This design preserves a small number of subject-specific tables essential to key clinical constructs in the data repository. It enables atomic information to be maintained in a transparent and meaningful way to researchers and health care practitioners who need to access data and still achieve the same performance level as conventional data warehouse models. A six-layer information processing strategy is developed to extract and transform EMRs to the research data repository. The data structure also complies with both Health Insurance Portability and Accountability Act regulations and the institutional review board's requirements. Although developed for diabetes screening research, the design of the outpatient encounter data framework is suitable for other types of health service research. It may also provide organizations a tool to improve health care quality and efficiency, consistent with the "meaningful use" objectives of the Health Information Technology for Economic and Clinical Health Act. © 2011 Diabetes Technology Society.
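The hybrid entity-attribute-value/dimensional design implies a transform step that pivots narrow clinical facts into analysis-ready rows. The six-layer strategy itself is not detailed in the abstract, but the core pivot can be sketched as follows (field names are hypothetical):

```python
# Toy sketch of one transform layer: pivoting narrow entity-attribute-value
# clinical facts into a wide, analysis-ready record per patient.
rows = [
    ("pt1", "hba1c", "6.1"),
    ("pt1", "bmi", "31.0"),
    ("pt2", "hba1c", "7.8"),
]

wide = {}
for patient, attribute, value in rows:
    wide.setdefault(patient, {})[attribute] = value

print(wide)
```

Keeping the narrow form as the system of record while materializing wide views for researchers is one way to reconcile the EAV and relational halves of such a design.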
Evaluating the Impact of Wikis on Student Learning Outcomes: An Integrative Review.
Trocky, Nina M; Buckley, Kathleen M
2016-01-01
Although wikis have been reported as effective tools for educators, uncertainty exists as to their effectiveness in achieving student learning outcomes. The aim of this integrative review was to examine the current evidence on the impact of wikis on student learning in courses requiring collaborative or co-developed assignments or activities. The authors searched several electronic databases for relevant articles and used R. Whittemore and K. Knafl's (2005) integrative review method to analyze and synthesize the evidence. Twenty-five articles met the selection criteria for this review, and four major themes for wiki use were identified: (a) writing skills, (b) collaboration, (c) knowledge acquisition, and (d) centralized repository. Although wikis have been found useful in improving student learning outcomes and hold great potential as an instructional strategy to aid students in learning various skills and gaining new knowledge, more research is needed on their effectiveness, especially in the area of nursing education. Copyright © 2016 Elsevier Inc. All rights reserved.
AstroML: Python-powered Machine Learning for Astronomy
NASA Astrophysics Data System (ADS)
Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.
2014-01-01
As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly-available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within Astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.
NASA Technical Reports Server (NTRS)
1982-01-01
The option of disposing of certain high-level nuclear wastes in space, as a complement to mined geological repositories, is studied. A brief overview of the study background, scope, objective, guidelines and assumptions, and contents is presented. The effects of variations in the waste mix on the space systems concept are determined, in order to establish the space system's effect on total system risk benefits when used as a complement to the DOE reference mined geological repository. The waste payload system, launch site, launch system, and orbit transfer system are all addressed. Rescue mission requirements are studied. The characteristics of waste forms suitable for space disposal are identified. Trajectories and performance requirements are discussed.
NASA Astrophysics Data System (ADS)
Doran, Rosa
Bringing recent results, future challenges and opportunities in space exploration to the knowledge of students has been a preoccupation of educators and space agencies for quite some time. The will to foster students' interest and reawaken their enthusiasm for science topics, and in particular research, occupies the minds of educators in all corners of the globe. But the challenge is growing literally at the speed of light. We are in the age of "Big Data". Information is abundant, and opportunities to build smart algorithms are flourishing. The problem at hand is how we are going to make use of all these possibilities. How can we prepare students for the challenges already upon them? How can we create a scientifically literate and conscious new generation? They are the future of mankind, and therefore this is a priority and should quickly be recognized as such. Empowering teachers is the key to facing the challenges and seizing the opportunities. Teachers and students need to learn how to establish fruitful collaborations in the pursuit of meaningful teaching and learning experiences. Teachers need to embrace the opportunities this ICT world is offering and accompany students' paths as tutors, not as explorers themselves. In this training session we intend to explore tools and repositories that bring real cutting-edge science to the hands of educators and their students. A full space exploration journey will be presented. Planetarium Software - Tools tailored to prepare an observing session or to explore space mission results will be presented in this topic. Participants will also have the opportunity to learn how to plan an observing session. This proves to be an excellent tool to teach about celestial movements and give students a sense of what it means to explore, for instance, the Solar System.
Robotic Telescopes and Radio Antennas - Having planned an observing session, the participants will be introduced to the use of robotic telescopes, a very powerful tool that allows educators to address a diversity of topics ranging from ICT tools to the exploration of our Universe. Instead of using traditional methods to teach certain subjects, for instance stellar spectra, extra-solar planets or the classification of galaxies, they can use these powerful tools. Among other advantages, a clear benefit of such tools is that teachers can use telescopes during regular classroom hours, provided they choose one located on the opposite side of the planet, where it is night time. Participants will also have the opportunity to use one of the radio antennas devoted to education from the EUHOU (European Hands-on Universe) Consortium. A map of the arms of our galaxy will be built during the training session. Image Processing - After acquiring the images, participants will be introduced to SalsaJ, an image processing package that allows educators to explore the potential of astronomical images. The first example will be a simple measurement task: measuring craters on the Moon. Further exploration will guide them from luminosity studies to the construction of colour images, and from making movies showing the rotation of the Sun to the dance of Jupiter's moons around the planet. E-learning Repositories - In the ICT age it is very important that educators have support and know where to find meaningful, curriculum-adapted resources for the construction of modern lessons. Some repositories will be presented in this session, for example Discover the Cosmos and EUHOU, as well as an aggregator of such repositories with quite advanced possibilities to support the work of teachers, the Open Discovery Space portal.
Sessions of this type are being successfully implemented by the Galileo Teacher Training Program team in Portugal under the scope of the EC-funded GO-LAB project, a project devoted to demonstrating innovative ways to involve teachers and students in e-Science through the use of virtual labs that simulate experiments, in order to spark young people's interest in science and in pursuing scientific careers.
Rock and Core Repository Coming Digital
NASA Astrophysics Data System (ADS)
Maicher, Doris; Fleischer, Dirk; Czerniak, Andreas
2016-04-01
In times when whole city centres are available at a mouse click in 3D to virtually walk through, reality sometimes becomes neglected. To the rising generation of scientists, it is hard to believe that scientific sample collections have not been digitised down to the essence of molecules, isotopes and electrons. Like any other geological institute, the Helmholtz Centre for Ocean Research GEOMAR has accumulated thousands of specimens. The samples, collected mainly during marine expeditions, date back as far as 1964. Today GEOMAR houses a central geological sample collection of at least 17 000 m of sediment core and more than 4 500 boxes with hard rock samples and refined sample specimens. This repository, having lain dormant, missed the onset of the interconnected digital age. Physical samples without barcodes, QR codes or RFID tags urgently need to be migrated and reconnected. In our use case, GEOMAR opted for the International Geo Sample Number (IGSN) as the persistent identifier. Consequently, the software CurationDIS by smartcube GmbH was selected as the central component of this project. The software is designed to handle the acquisition and administration of sample material and sample archiving in storage places. In addition, the software allows direct embedding of IGSNs. We plan to adopt the IGSN as a future asset, while for the initial inventory of our sample material, simple but unique QR codes act as "bridging identifiers" during the process. Currently we are compiling an overview of the broad variety of sample types and their associated data. QR-coding of the boxes of rock samples and sediment cores is near completion, delineating their location in the repository and linking a particular sample to any information available about the object. Planning is in progress to streamline the flow from receiving new samples, to their curation, to sharing samples and information publicly.
Additionally, interfaces for linkage to the GEOMAR databases OceanRep (publications) and OSIS (expeditions), as well as for external data retrieval, are in the pipeline. Looking ahead to implementing the IGSN, and taking on board lessons learned from earlier generations, it will enable us to comply with our institute's open science policy. It will also allow us to register newly collected samples already during ship expeditions, so that they receive their "birth certificate" at the time of collection in this ever faster revolving scientific world.
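The "bridging identifier" workflow described above can be sketched as a minimal in-memory registry that assigns a unique code per sample (the payload printed into the QR label) and later links a minted IGSN to it. The class, code format and field names below are hypothetical illustrations, not the CurationDIS data model or the IGSN registration API.

```python
import itertools

class SampleRegistry:
    """Minimal sketch of a sample inventory with QR 'bridging' codes."""

    def __init__(self, prefix="GEOMAR"):
        self._prefix = prefix
        self._counter = itertools.count(1)
        self._samples = {}   # bridging code -> sample metadata
        self._igsns = {}     # bridging code -> IGSN, once minted

    def register(self, metadata):
        """Store the metadata and return the code to encode in the QR label."""
        code = f"{self._prefix}-{next(self._counter):06d}"
        self._samples[code] = dict(metadata)
        return code

    def assign_igsn(self, code, igsn):
        """Attach a persistent identifier to an already-registered sample."""
        if code not in self._samples:
            raise KeyError(f"unknown bridging code: {code}")
        self._igsns[code] = igsn

    def lookup(self, code):
        """Resolve a scanned QR payload back to metadata plus IGSN (if any)."""
        record = dict(self._samples[code])
        record["igsn"] = self._igsns.get(code)   # None until minted
        return record

registry = SampleRegistry()
core = registry.register({"type": "sediment core", "cruise": "SO-123", "depth_m": 4.2})
registry.assign_igsn(core, "IGSN:EXAMPLE0001")
print(registry.lookup(core))
```

The point of the design is that the bridging code is stable from the moment of inventory, so labels can be printed immediately and the IGSN linkage can happen later without relabelling.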
NASA Astrophysics Data System (ADS)
Delgado, F. J.; Martinez, R.; Finat, J.; Martinez, J.; Puche, J. C.; Finat, F. J.
2013-07-01
In this work we develop a multiply interconnected system involving objects, agents and the interactions between them, based on the use of ICT applied to open repositories, user communities and web services. Our approach is applied to Architectural Cultural Heritage Environments (ACHE). It includes components for digital accessibility (to augmented ACHE repositories), content management (ontologies for the semantic web), semiautomatic recognition (to ease the reuse of materials) and serious video games (for interaction in urban environments). Their combination provides support for local real/remote virtual tourism (including some tools for low-level real-time display of rendering on portable devices), mobile-based smart interactions (with special regard to monitored environments) and CH-related games (as extended web services). The main contributions to AR models on usual GIS applied to architectural environments concern an interactive support, performed directly on digital files, which gives access to CH contents referred to GIS of urban districts (involving facades, historical or pre-industrial buildings) and/or CH repositories, in a ludic and transversal way, to acquire cognitive, medial and social abilities in collaborative environments.
Current Status of the Nuclear Waste Management Programme in Finland - 13441
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehto, Kimmo; Vuorio, Petteri
2013-07-01
Pursuant to the Decision-in-Principle of 2001, the Finnish programme for geologic disposal of spent fuel has now moved to the phase of applying for a construction licence to build the encapsulation plant and underground repository. The main objective of the former programme phase, the underground characterisation phase, was to confirm - or refute - the suitability of the Olkiluoto site by investigations conducted underground at the actual depth of the repository. The construction work on the access tunnel to the rock characterisation facility (ONKALO) started in the late summer of 2004. The site research and investigation work aimed at the maturity needed for submission of the application for a construction licence for the actual repository at the end of 2012. This requires, however, that the technology has also reached the needed maturity. The design and technical plans form the necessary platform for the development of the safety case for spent fuel disposal. A plan, a 'road map', has been produced for the portfolio of reports that demonstrates the safety of disposal as required by the criteria set by the government and further detailed by the safety authority, STUK. (authors)
Method and system of integrating information from multiple sources
Alford, Francine A [Livermore, CA; Brinkerhoff, David L [Antioch, CA
2006-08-15
A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.
Envoicing Silent Objects: Art and Literature at the Site of the Canadian Landscape
ERIC Educational Resources Information Center
Brock, Richard
2008-01-01
In this article, the author examines some of the ways in which art and literature converge upon the site of the Canadian landscape, generating an "ekphrastic" conception of place which reminds everyone constantly that every framed, static view of a landscape represents a story house, a repository of narratives concerning all those…
Information Technology and the Evolution of the Library
2009-03-01
Resource Commons/Repository/Federated Search; ILS (GLADIS/Pathfinder - Millenium)/Catalog/Circulation/Acquisitions/Digital Object Content... content management services to help centralize and distribute digital content from across the institution, software to allow for seamless federated searching across multiple databases, and imaging software to allow for daily reimaging of terminals to reduce security concerns that otherwise
ERIC Educational Resources Information Center
Hsiung, Chin-Min; Zheng, Xiang-Xiang
2015-01-01
The Measurements for Team Functioning (MTF) database contains a series of student academic performance measurements obtained at a national university in Taiwan. The measurements are acquired from unit tests and homework tests performed during a core mechanical engineering course, and provide an objective means of assessing the functioning of…
USDA-ARS?s Scientific Manuscript database
Purpose: Our objective was to investigate if insulin-like growth factor (IGF) axis genes affect the risk for age-related macular degeneration (AMD). Methods: 864 Caucasian non-diabetic participants from the Age-Related Eye Disease Study (AREDS) Genetic Repository were used in this case control st...
SVS: data and knowledge integration in computational biology.
Zycinski, Grzegorz; Barla, Annalisa; Verri, Alessandro
2011-01-01
In this paper we present a framework for structured variable selection (SVS). The main concept of the proposed schema is to take a step towards the integration of two different aspects of data mining: the database and machine learning perspectives. The framework is flexible enough to use not only microarray data but other high-throughput data of choice (e.g. from mass spectrometry or next-generation sequencing). Moreover, the feature selection phase incorporates prior biological knowledge in a modular way from various repositories and is ready to host different statistical learning techniques. We present a proof of concept of SVS, illustrating some implementation details and describing current results on high-throughput microarray data.
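A toy illustration of the structured-selection idea, filtering candidate features through prior-knowledge groups before ranking the survivors, might look as follows. The scoring rule (absolute class-mean gap), the data and the group names are invented for illustration; this is not the authors' SVS implementation.

```python
import numpy as np

def structured_select(X, y, feature_names, prior_groups, top_k=2):
    """Keep only features backed by prior knowledge, then rank the
    survivors by a simple univariate score (absolute class-mean gap)."""
    known = {f for group in prior_groups.values() for f in group}
    keep = [i for i, name in enumerate(feature_names) if name in known]
    scores = []
    for i in keep:
        gap = abs(X[y == 1, i].mean() - X[y == 0, i].mean())
        scores.append((gap, feature_names[i]))
    scores.sort(reverse=True)
    return [name for _, name in scores[:top_k]]

# Toy "expression matrix": 6 samples x 4 genes, two classes.
names = ["g1", "g2", "g3", "g4"]
X = np.array([[5, 1, 0, 9], [6, 1, 0, 8], [5, 2, 0, 9],
              [1, 1, 0, 9], [0, 2, 0, 8], [1, 1, 0, 9]], float)
y = np.array([1, 1, 1, 0, 0, 0])
pathways = {"pathwayA": ["g1", "g3"], "pathwayB": ["g2"]}  # prior knowledge

print(structured_select(X, y, names, pathways))
```

Note that `g4` is discarded regardless of its values because no prior-knowledge group mentions it, which is the structural constraint the framework imposes before any statistical learning runs.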
A Comparative Study with RapidMiner and WEKA Tools over some Classification Techniques for SMS Spam
NASA Astrophysics Data System (ADS)
Foozy, Cik Feresa Mohd; Ahmad, Rabiah; Faizal Abdollah, M. A.; Chai Wen, Chuah
2017-08-01
SMS spamming is a serious attack that abuses the SMS service by spreading advertisements in bulk. Unwanted advertising messages disturb users and violate the privacy of mobile users. To overcome these issues, many studies have proposed detecting SMS spam using data mining tools. This paper carries out a comparative study using five machine learning techniques, namely Naïve Bayes, k-NN (k-Nearest Neighbour), Decision Tree, Random Forest and Decision Stump, to observe the accuracy results of RapidMiner and WEKA on the SMS Spam dataset from the UCI Machine Learning Repository.
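One of the five compared techniques, Naïve Bayes, can be sketched from scratch to show the underlying idea of word-frequency-based spam scoring. The toy training messages below are invented; this sketch does not reproduce the paper's RapidMiner/WEKA experiments or the UCI dataset.

```python
import math
from collections import Counter

def train_nb(messages):
    """Train a tiny multinomial Naive Bayes spam filter.
    `messages` is a list of (text, label), label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    docs = Counter()
    for text, label in messages:
        docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, docs, vocab

def classify(text, counts, docs, vocab):
    """Pick the label with the highest log posterior for the message."""
    best, best_lp = None, -math.inf
    for label in ("spam", "ham"):
        lp = math.log(docs[label] / sum(docs.values()))   # class prior
        total = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero the product.
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [("win a free prize now", "spam"),
         ("free cash claim prize", "spam"),
         ("are we meeting for lunch", "ham"),
         ("see you at lunch tomorrow", "ham")]
model = train_nb(train)
print(classify("claim your free prize", *model))
```

The same per-word likelihood idea underlies the Naïve Bayes implementations in both tools being compared; the tools differ mainly in preprocessing options and default smoothing.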
Diagnosing Parkinson's Diseases Using Fuzzy Neural System
Abiyev, Rahib H.; Abizade, Sanan
2016-01-01
This study presents the design of a recognition system that discriminates between healthy people and people with Parkinson's disease. Diagnosis of Parkinson's disease is performed using a fusion of fuzzy systems and neural networks. The structure and learning algorithms of the proposed fuzzy neural system (FNS) are presented. The approach described in this paper enhances the capability of the designed system to efficiently distinguish healthy individuals from those with the disease. This was demonstrated through simulations of the system performed using data obtained from the UCI Machine Learning Repository. A comparative study was carried out, and the simulation results demonstrated that the proposed fuzzy neural system improves the recognition rate of the designed system. PMID:26881009
Criteria for the evaluation and certification of long-term digital archives in the earth sciences
NASA Astrophysics Data System (ADS)
Klump, Jens
2010-05-01
Digital information has become an indispensable part of our cultural and scientific heritage. Scientific findings, historical documents and cultural achievements are to a rapidly increasing extent being presented in electronic form - in many cases exclusively so. However, besides the invaluable advantages offered by this form, it also carries a serious disadvantage: users need to invest a great deal of technical effort in accessing the information. Also, the underlying technology is still undergoing further development at an exceptionally fast pace. The rapid obsolescence of the technology required to read the information combined with the frequently imperceptible physical decay of the media themselves represents a serious threat to preservation of the information content. Many data sets in earth science research are from observations that cannot be repeated. This makes these digital assets particularly valuable. Therefore, these data should be kept and made available for re-use long after the end of the project from which they originated. Since research projects only run for a relatively short period of time, it is advisable to shift the burden of responsibility for long-term data curation from the individual researcher to a trusted data repository or archive. But what makes a trusted data repository? Each trusted digital repository has its own targets and specifications. The trustworthiness of digital repositories can be tested and assessed on the basis of a criteria catalogue. This is the main focus of the work of the nestor working group "Trusted repositories - Certification". It identifies criteria which permit the trustworthiness of a digital repository to be evaluated, both at the organisational and technical levels. The criteria are defined in close collaboration with a wide range of different memory organisations, producers of information, experts and other interested parties. 
This open approach ensures a high degree of universal validity, suitability for daily practical use and also broad-based acceptance of the results. The criteria catalogue is also intended to present the option of documenting trustworthiness by means of certification in a standardised national or international process. The criteria catalogue is based on the Reference Model for an Open Archival Information System (OAIS, ISO 14721:2003). With its broad approach, the nestor criteria catalogue for trusted digital repositories has to remain at a high level of abstraction. For application in the earth sciences, the evaluation criteria need to be transferred into the context of earth science data and their designated user community. This presentation offers a brief introduction to the problems surrounding the long-term preservation of digital objects. This introduction is followed by a proposed application of the criteria catalogue for trusted digital repositories to the context of earth science data and their long-term preservation.
NASA Astrophysics Data System (ADS)
Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen
Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independently of its underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures, allowing interoperability with new-generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.
Data2Paper: A stakeholder-driven solution to data publication and citation challenges
NASA Astrophysics Data System (ADS)
Murphy, Fiona; Jefferies, Neil; Ingraham, Thomas; Murray, Hollydawn; Ranganathan, Anusha
2017-04-01
Data, and especially open data, are valuable to the community but can also be valuable to the researcher. Data papers are a clear and open way to publicize and contextualize your data in a way that is citable and aids both reproducibility and efficiency in scholarly endeavour. However, this is not yet a format that is well understood or proliferating amongst the mainstream research community. As part of the Jisc Data Spring Initiative, a team of stakeholders (publishers, data repository managers, coders) has been developing a simple 'one-click' process in which data, metadata and methods details are transferred from a data repository (via a SWORD-based API and a cloud-based helper app built on the Fedora/Hydra platform) to a relevant publisher platform for publication as a data paper. Relying on automated processes, using ORCIDs to authenticate and pre-populate article templates and building on the DOI infrastructure to support provenance and citation, the app seeks to drive the deposit of data in repositories and encourage the growth of data papers by removing redundant metadata entry and streamlining publisher submissions into a single consistent workflow. This poster will explain the underlying rationale and evidence gathering, development, partnerships, governance and other progress that this project has so far achieved. It will outline some key learning opportunities, challenges and drivers, and explore the next steps.
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...
Transfer, Storage and Oversight of the Warren Air Force Base Cohort Serum Repository and Data Assets
2009-07-08
epidemiology, natural history and treatment of group A streptococcal infections as well as prevention of complications was learned during the Warren...Conflict. The sera were obtained as a part of the extensive studies of streptococcal infections and rheumatic fever carried out by Dr. Charles...studies between 1949 and 1952. The Warren laboratory and group of investigators were awarded a Lasker Prize for these sentinel studies in 1954. 4
Granite disposal of U.S. high-level radioactive waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeze, Geoffrey A.; Mariner, Paul E.; Lee, Joon H.
This report evaluates the feasibility of disposing of U.S. high-level radioactive waste in granite several hundred meters below the surface of the earth. The U.S. has many granite formations with positive attributes for permanent disposal. Similar crystalline formations have been extensively studied by international programs, two of which, in Sweden and Finland, are the host rocks of submitted or imminent repository license applications. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in granite media. In this report we develop scoping performance analyses, based on the applicable features, events, and processes (FEPs) identified by international investigators, to support generic conclusions regarding post-closure safety. Unlike the safety analyses for disposal in salt, shale/clay, or deep boreholes, the safety analysis for a mined granite repository depends largely on waste package preservation. In crystalline rock, waste packages are preserved by the high mechanical stability of the excavations, the diffusive barrier of the buffer, and favorable chemical conditions. The buffer is preserved by low groundwater fluxes, favorable chemical conditions, backfill, and the rigid confines of the host rock. An added advantage of a mined granite repository is that waste packages would be fairly easy to retrieve, should retrievability be an important objective. The results of the safety analyses performed in this study are consistent with the results of comprehensive safety assessments performed for sites in Sweden, Finland, and Canada. They indicate that a granite repository would satisfy established safety criteria and suggest that a small number of FEPs would largely control the release and transport of radionuclides. In the event the U.S. decides to pursue a potential repository in granite, a detailed evaluation of these FEPs would be needed to inform site selection and safety assessment.
PGP repository: a plant phenomics and genomics data publication infrastructure
Arend, Daniel; Junker, Astrid; Scholz, Uwe; Schüler, Danuta; Wylie, Juliane; Lange, Matthias
2016-01-01
Plant genomics and phenomics represent the most promising tools for accelerating yield gains and overcoming emerging crop productivity bottlenecks. However, accessing this wealth of plant diversity requires the characterization of this material using state-of-the-art genomic, phenomic and molecular technologies and the release of subsequent research data via a long-term stable, open-access portal. Although several international consortia and public resource centres offer services for plant research data management, valuable digital assets remain unpublished and thus inaccessible to the scientific community. Recently, the Leibniz Institute of Plant Genetics and Crop Plant Research and the German Plant Phenotyping Network have jointly initiated the Plant Genomics and Phenomics Research Data Repository (PGP) as an infrastructure to comprehensively publish plant research data. This covers in particular cross-domain datasets that are not being published in central repositories because of their volume or unsupported data scope, such as image collections from plant phenotyping and microscopy, unfinished genomes, genotyping data, visualizations of morphological plant models, data from mass spectrometry, as well as software and documents. The repository is hosted at the Leibniz Institute of Plant Genetics and Crop Plant Research using e!DAL as software infrastructure and a Hierarchical Storage Management System as data archival backend. A newly developed data submission tool was made available to the consortium that features a high level of automation to lower the barriers of data publication. After an internal review process, data are published as citable digital object identifiers and a core set of technical metadata is registered at DataCite. The e!DAL-embedded Web frontend generates a landing page for each dataset and supports interactive exploration.
PGP is registered as a research data repository at BioSharing.org, re3data.org and OpenAIRE as a valid EU Horizon 2020 open data archive. These features, together with the programmatic interface and the support of standard metadata formats, enable PGP to fulfil the FAIR data principles: findable, accessible, interoperable, reusable. Database URL: http://edal.ipk-gatersleben.de/repos/pgp/ PMID:27087305
Burchill, Charles; Fergusson, Patricia; Jebamani, Laurel; Turner, Ken; Dueck, Stephen
2000-01-01
Background Comprehensive data available in the Canadian province of Manitoba since 1970 have aided study of the interaction between population health, health care utilization, and structural features of the health care system. Given a complex linked database and many ongoing projects, better organization of available epidemiological, institutional, and technical information was needed. Objective The Manitoba Centre for Health Policy and Evaluation wished to develop a knowledge repository to handle data, document research methods, and facilitate both internal communication and collaboration with other sites. Methods This evolving knowledge repository consists of both public and internal (restricted access) pages on the World Wide Web (WWW). Information can be accessed using an indexed logical format or queried to allow entry at user-defined points. The main topics are: Concept Dictionary, Research Definitions, Meta-Index, and Glossary. The Concept Dictionary operationalizes concepts used in health research using administrative data, outlining the creation of complex variables. Research Definitions specify the codes for common surgical procedures, tests, and diagnoses. The Meta-Index organizes concepts and definitions according to the Medical Sub-Heading (MeSH) system developed by the National Library of Medicine. The Glossary facilitates navigation through the research terms and abbreviations in the knowledge repository. An Education Resources heading presents a web-based graduate course using substantial amounts of material in the Concept Dictionary, a lecture in the Epidemiology Supercourse, and material for Manitoba's Regional Health Authorities. Confidential information (including Data Dictionaries) is available on the Centre's internal website. Results Use of the public pages has increased dramatically since January 1998, with almost 6,000 page hits from 250 different hosts in May 1999. 
More recently, the number of page hits has averaged around 4,000 per month, while the number of unique hosts has climbed to around 400. Conclusions This knowledge repository promotes standardization and increases efficiency by placing concepts and associated programming in the Centre's collective memory. Collaboration and project management are facilitated. PMID:11720929
Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences
NASA Astrophysics Data System (ADS)
Smith, P., II
2015-12-01
Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates the need for federally funded projects to make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences, such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers of EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to interoperability, data access, discovery, and integration issues between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and linking data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice for contribution to improved understanding of the Earth's physical and biological systems.
The USGS CDI SSF can be used as a reference model to map to EarthCube Funded projects with academic research libraries facilitating the data and information assets components of the USGS CDI SSF via institutional repositories and/or digital content management. This session will explore the USGS CDI SSF for cross-discipline collaboration considerations from a library perspective.
40 CFR 124.33 - Information repository.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Information repository. 124.33 Section... FOR DECISIONMAKING Specific Procedures Applicable to RCRA Permits § 124.33 Information repository. (a... basis, for an information repository. When assessing the need for an information repository, the...
10 CFR 60.130 - General considerations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REPOSITORIES Technical Criteria Design Criteria for the Geologic Repository Operations Area § 60.130 General... for a high-level radioactive waste repository at a geologic repository operations area, and an... geologic repository operations area, must include the principal design criteria for a proposed facility...
A machine learning-based framework to identify type 2 diabetes through electronic health records
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2016-01-01
Objective To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and can miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop, as a pilot study, a semi-automated framework based on machine learning that liberalizes the filtering criteria to improve the recall rate while keeping the false positive rate low. Materials and methods We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a diabetes-related cohort of 23,281 subjects retrieved from a regional distributed EHR repository covering 2012 to 2014. Results We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Discussion Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high miss rates due to conservative selection criteria.
Our framework leverages machine learning and feature engineering to loosen such selection criteria to achieve a high identification rate of cases and controls. Conclusions Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. PMID:27919371
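The benchmarking step the abstract describes, training several standard classifiers on engineered EHR features and comparing them by cross-validated AUC, can be sketched as follows. This is an illustrative sketch on synthetic data only; the feature matrix, label rule, and cohort size stand in for the authors' actual engineered features and 300-patient sample:

```python
# Sketch of the classifier benchmark described above. The data is synthetic:
# 300 "subjects" with 20 "engineered features" and a label correlated with
# the first two features, standing in for the real EHR-derived inputs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 300, 20                      # illustrative sizes, not the study's real feature set
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # synthetic T2DM label

models = {
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LogReg": LogisticRegression(max_iter=1000),
}

scores = {}
for name, model in models.items():
    # 5-fold cross-validated AUC, the headline metric in the abstract
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

for name, auc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} AUC = {auc:.3f}")
```

On real EHR data the same loop applies unchanged; only the feature-engineering step that produces `X` and `y` differs.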
Experiments with Analytic Centers: A confluence of data, tools and help in using them.
NASA Astrophysics Data System (ADS)
Little, M. M.; Crichton, D. J.; Hines, K.; Cole, M.; Quam, B. M.
2017-12-01
Traditional repositories have been primarily focused on data stewardship. Over the past two decades, data scientists have attempted to overlay a superstructure to make these repositories more amenable to analysis tasks, with limited success. This poster will summarize lessons learned and some realizations regarding what it takes to create an analytic center. As the volume of Earth Science data grows and the sophistication of analytic tools improves, a pattern has emerged that indicates different science communities uniquely apply a selection of tools to the data to produce scientific results. Infrequently do the experiences of one group help steer other groups. How can the information technology community seed these domains with tools that conform to the thought processes and experiences of that particular science group? What types of successful technology infusions have occurred, and how does technology get adopted? AIST has been experimenting with the management of this analytic center process; this paper will summarize the results and indicate a direction for future infusion attempts.
Rural migration in Nevada: Lincoln County. Phase 1, 1992--1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soden, D.L.; Carns, D.E.; Mosser, D.
1993-12-31
The principal objective of this project was to develop insight into the scope of migration of working-age Nevadans out of their county of birth, including the collection of data on their skill levels, desire to out- or in-migrate, interactions between families of migratory persons, and the impact that the proposed high-level nuclear waste repository at Yucca Mountain might have on their individual, and collective, decisions to migrate and return. The initial phase of this project, reported here, was conducted in 1992 and 1993 in Lincoln County, Nevada, one of the counties designated as "affected" by the proposed repository program. The findings suggest that a serious out-migration problem exists in Lincoln County, and that the Yucca Mountain project will likely affect decisions relating to migration patterns in the future.
Repository-based software engineering program: Concept document
NASA Technical Reports Server (NTRS)
1992-01-01
This document provides the context for Repository-Based Software Engineering's (RBSE's) evolving functional and operational product requirements, and it is the parent document for development of detailed technical and management plans. When furnished, requirements documents will serve as the governing RBSE product specification. The RBSE Program Management Plan will define resources, schedules, and technical and organizational approaches to fulfilling the goals and objectives of this concept. The purpose of this document is to provide a concise overview of RBSE, describe the rationale for the RBSE Program, and define a clear, common vision for RBSE team members and customers. The document also provides the foundation for developing RBSE user and system requirements and a corresponding Program Management Plan. The concept is used to express the program mission to RBSE users and managers and to provide an exhibit for community review.
48 CFR 227.7108 - Contractor data repositories.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Technical Data 227.7108 Contractor data repositories. (a) Contractor data repositories may be established... procedures for protecting technical data delivered to or stored at the repository from unauthorized release... disclosure of technical data from the repository to third parties consistent with the Government's rights in...
Metadata mapping and reuse in caBIG™
Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis
2009-01-01
Background This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG™). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG™ framework or other frameworks that use metadata repositories. Results The Dice (di-grams) and Dynamic algorithms are compared, and both have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG™ framework and potentially any framework that uses a metadata repository. Conclusion This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG™. This effort contributes to facilitating the development of interoperable systems within caBIG™ as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle the enormous amounts of diverse data that can be leveraged from new biomedical methodologies. PMID:19208192
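The Dice di-gram similarity mentioned in the Results can be sketched in a few lines: it compares the sets of character bigrams of two identifiers. The attribute and candidate names below are hypothetical examples, not actual caBIG™ CDEs:

```python
# Minimal sketch of Dice (di-gram) matching of a UML class-attribute name
# against candidate CDE names. The names and candidates are illustrative.
def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over the character-bigram sets of two identifiers."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Hypothetical UML attribute vs. candidate CDE names:
candidates = ["PatientIdentifier", "SpecimenType", "PatientId"]
attr = "patient_id"
best = max(candidates, key=lambda c: dice(attr.replace("_", ""), c))
print(best, round(dice(attr.replace("_", ""), best), 2))  # → PatientId 1.0
```

In practice a match is typically accepted only above some similarity threshold, with the remaining candidates reviewed by a curator.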
Semantic framework for mapping object-oriented model to semantic web languages
Ježek, Petr; Mouček, Roman
2015-01-01
The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technological advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not to be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923
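The prototype described above attaches semantics to an object-oriented model via reflective Java annotations, later transformed to OWL. As a rough Python analogue (illustrative only; the real Semantic Framework is a Java library and the ontology terms below are invented), class attributes can be tagged with ontology predicates and exported as simple triples:

```python
# Python analogue of annotation-driven semantic enrichment: a decorator tags
# attributes of a domain class with (hypothetical) ontology predicates, and
# a small exporter collects OWL-style (subject, predicate, value) triples.
SEM = "__semantics__"

def semantic(**terms):
    """Class decorator attaching ontology predicates to named attributes."""
    def wrap(cls):
        setattr(cls, SEM, terms)
        return cls
    return wrap

@semantic(sampling_rate="eegOnt:hasSamplingRate",    # hypothetical ontology IRIs
          channel_count="eegOnt:hasChannelCount")
class EegRecording:
    def __init__(self, sampling_rate, channel_count):
        self.sampling_rate = sampling_rate
        self.channel_count = channel_count

def to_triples(obj):
    """Collect (subject, predicate, value) triples from annotated attributes."""
    subject = type(obj).__name__
    terms = getattr(type(obj), SEM, {})
    return [(subject, pred, getattr(obj, attr)) for attr, pred in terms.items()]

rec = EegRecording(sampling_rate=1000, channel_count=32)
for t in to_triples(rec):
    print(t)
```

The appeal of the approach, as the abstract notes, is that the domain code stays ordinary object-oriented code; the semantics live in declarative annotations that a separate exporter turns into a Semantic Web representation.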
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voizard, Patrice; Mayer, Stefan; Ouzounian, Gerald
Over the past 15 years, the French program on deep geologic disposal of high level and long-lived radioactive waste has benefited from a clear legal framework as the result of the December 30, 1991 French Waste Act. To fulfil its obligations stipulated in this law, ANDRA has submitted the 'Dossier 2005 Argile' (clay) and 'Dossier 2005 Granite' to the French Government. The first of those reports presents a concept for the underground disposal of nuclear waste at a specific clay site and focuses on a feasibility study. Knowledge of the host rock characteristics is based on the investigations carried out at the Meuse/Haute Marne Underground Research Laboratory. The repository concept addresses various issues, the most important of which relates to the large amount of waste, the clay host rock and the reversibility requirement. This phase has ended upon review and evaluation of the 'Dossier 2005' made by different organisations including the National Review Board, the National Safety Authority and the NEA International Review Team. By passing the 'new', June 28, 2006 Planning Act on the sustainable management of radioactive materials and waste, the French parliament has further defined a clear legal framework for future work. This June 28 Planning Act thus sets a schedule and defines the objectives for the next phase of repository design in requesting the submission of a construction authorization application by 2015. The law calls for the repository program to be in a position to commission disposal installations by 2025. (authors)
Repository-Based Software Engineering Program: Working Program Management Plan
NASA Technical Reports Server (NTRS)
1993-01-01
Repository-Based Software Engineering Program (RBSE) is a National Aeronautics and Space Administration (NASA) sponsored program dedicated to introducing and supporting common, effective approaches to software engineering practices. The process of conceiving, designing, building, and maintaining software systems by using existing software assets that are stored in a specialized operational reuse library or repository, accessible to system designers, is the foundation of the program. In addition to operating a software repository, RBSE promotes (1) software engineering technology transfer, (2) academic and instructional support of reuse programs, (3) the use of common software engineering standards and practices, (4) software reuse technology research, and (5) interoperability between reuse libraries. This Program Management Plan (PMP) is intended to communicate program goals and objectives, describe major work areas, and define a management report and control process. This process will assist the Program Manager, University of Houston at Clear Lake (UHCL) in tracking work progress and describing major program activities to NASA management. The goal of this PMP is to make managing the RBSE program a relatively easy process that improves the work of all team members. The PMP describes work areas addressed and work efforts being accomplished by the program; however, it is not intended as a complete description of the program. Its focus is on providing management tools and management processes for monitoring, evaluating, and administering the program; and it includes schedules for charting milestones and deliveries of program products. The PMP was developed by soliciting and obtaining guidance from appropriate program participants, analyzing program management guidance, and reviewing related program management documents.
Couderc, Jean-Philippe
2010-01-01
The sharing of scientific data reinforces open scientific inquiry; it encourages diversity of analysis and opinion while promoting new research and facilitating the education of next generations of scientists. In this article, we present an initiative for the development of a repository containing continuous electrocardiographic information and the associated clinical information. This information is shared with the worldwide scientific community in order to improve quantitative electrocardiology and cardiac safety. First, we present the objectives of the initiative and its mission. Then, we describe the resources available in this initiative in terms of three components: data, expertise and tools. The data available in the Telemetric and Holter ECG Warehouse (THEW) include continuous ECG signals and associated clinical information. The initiative attracted various academic and private partners whose expertise covers a large list of research arenas related to quantitative electrocardiography; their contribution to the THEW promotes cross-fertilization of scientific knowledge, resources, and ideas that will advance the field of quantitative electrocardiography. Finally, the tools of the THEW include software and servers to access and review the data available in the repository. To conclude, the THEW is an initiative developed to benefit the scientific community and to advance the field of quantitative electrocardiography and cardiac safety. It is a new repository designed to complement existing ones such as Physionet, the AHA-BIH Arrhythmia Database, and the CSE database. The THEW hosts unique datasets from clinical trials and drug safety studies that, so far, were not available to the worldwide scientific community. PMID:20863512
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services behind a virtualisation layer, providing a user interface that researchers can use. There are many scientific workflow systems, but they are not interoperable. Learning a workflow system and creating workflows may require significant effort. Considering this effort, it is not reasonable to expect that researchers will learn new workflow systems if they want to run workflows developed in other workflow systems. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability (CGI) concept. It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept, integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface, using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines, plus the executables and data needed to execute them. It offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository.
This service either invokes locally or remotely pre-deployed workflow engines or submits workflow engines with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application supports to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.
McHugh, Seamus Mark; Corrigan, Mark; Dimitrov, Borislav; Cowman, Seamus; Tierney, Sean; Humphreys, Hilary; Hill, Arnold
2010-01-01
Surgical site infection accounts for 20% of all health care-associated infections (HCAIs); however, a program incorporating the education of surgeons has yet to be established across the specialty. An audit of surgical practice in infection prevention was carried out in Beaumont Hospital from July to November 2009. An educational Web site was developed targeting deficiencies highlighted in the audit. Interactive clinical cases were constructed using PHP coding, an HTML-embedded language, and then linked to a MySQL relational database. PowerPoint tutorials were produced as online Flash audiovisual movies. An online repository of streaming videos demonstrating best practice was made available, and weekly podcasts were made available on the iTunes© store for free download. Usage of the e-learning program was assessed quantitatively over 6 weeks in May and June 2010 using the commercial company Hitslink. During the 5-month audit, deficiencies in practice were highlighted, including the timing of surgical prophylaxis (33% noncompliance) and intravascular catheter care in surgical patients (38% noncompliance regarding necessity). Over the 6-week assessment of the educational material, the SurgInfection.com Web pages were accessed more than 8000 times; 77.9% of the visitors were from Ireland. The most commonly accessed modality was the repository with interactive clinical cases, accounting for 3463 (43%) of the Web site visits. The average user spent 57 minutes per visit, with 30% of them visiting the Web site multiple times. Interactive virtual cases mirroring real-life clinical scenarios are likely to be successful as an e-learning modality. User-friendly interfaces and 24-hour accessibility will increase uptake by surgical trainees.
17 CFR 49.12 - Swap data repository recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Swap data repository... COMMISSION SWAP DATA REPOSITORIES § 49.12 Swap data repository recordkeeping requirements. (a) A registered swap data repository shall maintain its books and records in accordance with the requirements of part...
17 CFR 49.12 - Swap data repository recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Swap data repository... COMMISSION SWAP DATA REPOSITORIES § 49.12 Swap data repository recordkeeping requirements. (a) A registered swap data repository shall maintain its books and records in accordance with the requirements of part...
17 CFR 49.12 - Swap data repository recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 17 Commodity and Securities Exchanges 2 2014-04-01 2014-04-01 false Swap data repository... COMMISSION (CONTINUED) SWAP DATA REPOSITORIES § 49.12 Swap data repository recordkeeping requirements. (a) A registered swap data repository shall maintain its books and records in accordance with the requirements of...
Code of Federal Regulations, 2010 CFR
2010-01-01
... geologic repository operations area. 63.112 Section 63.112 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical... repository operations area. The preclosure safety analysis of the geologic repository operations area must...
Managing and Evaluating Digital Repositories
ERIC Educational Resources Information Center
Zuccala, Alesia; Oppenheim, Charles; Dhiensa, Rajveen
2008-01-01
Introduction: We examine the role of the digital repository manager, discuss the future of repository management and evaluation and suggest that library and information science schools develop new repository management curricula. Method: Face-to-face interviews were carried out with managers of five different types of repositories and a Web-based…
jPOSTrepo: an international standard data repository for proteomes
Okuda, Shujiro; Watanabe, Yu; Moriya, Yuki; Kawano, Shin; Yamamoto, Tadashi; Matsumoto, Masaki; Takami, Tomoyo; Kobayashi, Daiki; Araki, Norie; Yoshizawa, Akiyasu C.; Tabata, Tsuyoshi; Sugiyama, Naoyuki; Goto, Susumu; Ishihama, Yasushi
2017-01-01
Major advancements have recently been made in mass spectrometry-based proteomics, yielding an increasing number of datasets from various proteomics projects worldwide. In order to facilitate the sharing and reuse of promising datasets, it is important to construct appropriate, high-quality public data repositories. jPOSTrepo (https://repository.jpostdb.org/) has successfully implemented several unique features, including high-speed file uploading, flexible file management and easy-to-use interfaces. This repository has been launched as a public repository containing various proteomic datasets and is available for researchers worldwide. In addition, our repository has joined the ProteomeXchange consortium, which includes the most popular public repositories such as PRIDE in Europe for MS/MS datasets and PASSEL for SRM datasets in the USA. Later MassIVE was introduced in the USA and accepted into the ProteomeXchange, as was our repository in July 2016, providing important datasets from Asia/Oceania. Accordingly, this repository thus contributes to a global alliance to share and store all datasets from a wide variety of proteomics experiments. Thus, the repository is expected to become a major repository, particularly for data collected in the Asia/Oceania region. PMID:27899654
Review of waste package verification tests. Semiannual report, October 1982-March 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soo, P.
1983-08-01
The current study is part of an ongoing task to specify tests that may be used to verify that engineered waste package/repository systems comply with NRC radionuclide containment and controlled-release performance objectives. Work covered in this report analyzes verification tests for borosilicate glass waste forms and bentonite- and zeolite-based packing materials (discrete backfills). 76 references.
Save medical personnel's time by improved user interfaces.
Kindler, H
1997-01-01
Common objectives in the industrial countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case-mix systems for reimbursement by social-security institutions. More data are required to enable quality improvement and increases in clinical effectiveness, and for juridical reasons. At first glance, this documentation effort contradicts cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals, applying sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend towards client/server systems with relational or object-oriented databases as a repository. The work flows used for controlling purposes and to steer the user interfaces must be represented in the repository.
TeleMed: An example of a new system developed with object technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.; Phillips, R.; Tomlinson, B.
1996-12-01
Los Alamos National Laboratory has developed a virtual patient record system called TeleMed which is based on a distributed national radiographic and patient record repository located throughout the country. Without leaving their offices, participating doctors can view clinical drug and radiographic data via a sophisticated multimedia interface. For example, a doctor can match a patient's radiographic information with the data in the repository, review treatment history and success, and then determine the best treatment. Furthermore, the features of TeleMed that make it attractive to clinicians and diagnosticians make it valuable for teaching and presentation as well. Thus, a resident can use TeleMed for self-training in diagnostic techniques and a physician can use it to explain to a patient the course of their illness. In fact, the data can be viewed simultaneously by users at two or more distant locations for consultation with specialists in different fields. This capability is of enormous value to a wide spectrum of healthcare providers. It is made possible by the integration of multimedia information using commercial CORBA technology linking object-enabled databases with client interfaces using a three-tiered architecture.
Mere exposure alters category learning of novel objects.
Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J
2010-01-01
We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.
Bayesian molecular design with a chemical language model
NASA Astrophysics Data System (ADS)
Ikebata, Hisaki; Hongo, Kenta; Isomura, Tetsu; Maezono, Ryo; Yoshida, Ryo
2017-04-01
The aim of computational molecular design is the identification of promising hypothetical molecules with a predefined set of desired properties. We address the issue of accelerating materials discovery with state-of-the-art machine learning techniques. The method involves two different types of prediction: the forward and backward predictions. The objective of the forward prediction is to create a set of machine learning models for various properties of a given molecule. Inverting the trained forward models through Bayes' law, we derive a posterior distribution for the backward prediction, which is conditioned on a desired property requirement. Exploring high-probability regions of the posterior with a sequential Monte Carlo technique, molecules that exhibit the desired properties can be created computationally. One major difficulty in the computational creation of molecules is excluding chemically unfavorable structures. To circumvent this issue, we derive a chemical language model that acquires commonly occurring patterns of chemical fragments through natural language processing of ASCII strings of existing compounds, which follow the SMILES chemical language notation. In the backward prediction, the trained language model is used to refine chemical strings such that the properties of the resulting structures fall within the desired property region while chemically unfavorable structures are successfully removed. The present method is demonstrated through the design of small organic molecules with property requirements on HOMO-LUMO gap and internal energy. The R package iqspr is available at the CRAN repository.
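The forward/backward loop described above can be sketched in miniature. The snippet below is a toy illustration, not the iqspr implementation: the forward model is a fixed linear surrogate, the "language model" prior is a simple plausibility score over invented integer feature vectors standing in for SMILES strings, and plain importance sampling replaces sequential Monte Carlo.

```python
import math
import random

random.seed(0)

# Forward model f(x): predicts a property from candidate features.
# (Assumption: in the paper this is a trained ML model; here a fixed
# linear surrogate over toy 2-feature "molecules".)
def forward(x):
    return 0.5 * x[0] + 0.3 * x[1]

# Likelihood of meeting the desired property y* under Gaussian noise.
def likelihood(x, y_star, sigma=0.5):
    return math.exp(-(forward(x) - y_star) ** 2 / (2 * sigma ** 2))

# Prior p(x): stands in for the chemical language model, which scores
# how "chemically plausible" a candidate is (here: prefer small features).
def prior(x):
    return math.exp(-0.1 * sum(abs(v) for v in x))

# Importance-sampling approximation of the posterior p(x | y*).
y_star = 2.0
candidates = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(5000)]
weights = [likelihood(x, y_star) * prior(x) for x in candidates]
best = max(zip(weights, candidates))[1]
print(best, round(forward(best), 2))
```

Taking the highest-weight candidate (the argmax above) surfaces a structure whose predicted property lands near the target y*; the actual method instead explores the high-probability region with sequential Monte Carlo.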
From event analysis to global lessons: disaster forensics for building resilience
NASA Astrophysics Data System (ADS)
Keating, Adriana; Venkateswaran, Kanmani; Szoenyi, Michael; MacClune, Karen; Mechler, Reinhard
2016-04-01
With unprecedented growth in disaster risk, there is an urgent need for enhanced learning about and understanding of disasters, particularly in relation to the trends in the drivers of increasing risk. Building on the disaster forensics field, we introduce the Post Event Review Capability (PERC) methodology for systematically and holistically analyzing disaster events, and identifying actionable recommendations. PERC responds to a need for learning about the successes and failures in disaster risk management and resilience, and uncovers the underlying drivers of increasing risk. We draw generalizable insights from seven applications of the methodology to date, where we find that across the globe policy makers and practitioners in disaster risk management face strikingly similar challenges despite variations in context, indicating encouraging potential for mutual learning. These lessons highlight the importance of integrated risk reduction strategies. We invite others to utilize the freely available PERC approach and contribute to building a repository of learnings on disaster risk management and resilience.
From event analysis to global lessons: disaster forensics for building resilience
NASA Astrophysics Data System (ADS)
Keating, Adriana; Venkateswaran, Kanmani; Szoenyi, Michael; MacClune, Karen; Mechler, Reinhard
2016-07-01
With unprecedented growth in disaster risk, there is an urgent need for enhanced learning and understanding of disasters, particularly in relation to the trends in drivers of increasing risk. Building on the disaster forensics field, we introduce the post-event review capability (PERC) methodology for systematically and holistically analysing disaster events, and identifying actionable recommendations. PERC responds to a need for learning about the successes and failures in disaster risk management and resilience, and uncovers the underlying drivers of increasing risk. We draw generalisable insights from seven applications of the methodology to date, where we find that across the globe policy makers and practitioners in disaster risk management face strikingly similar challenges despite variations in context, indicating encouraging potential for mutual learning. These lessons highlight the importance of integrated risk reduction strategies. We invite others to utilise the freely available PERC approach and contribute to building a repository of learning on disaster risk management and resilience.
The experiment editor: supporting inquiry-based learning with virtual labs
NASA Astrophysics Data System (ADS)
Galan, D.; Heradio, R.; de la Torre, L.; Dormido, S.; Esquembre, F.
2017-05-01
Inquiry-based learning is a pedagogical approach in which students are motivated to pose their own questions when facing problems or scenarios. In physics learning, students are turned into scientists who carry out experiments, collect and analyze data, formulate and evaluate hypotheses, and so on. Lab experimentation is essential for inquiry-based learning, yet a drawback of traditional hands-on labs is the high cost associated with equipment, space, and maintenance staff. Virtual laboratories help to reduce these costs. This paper enriches the virtual lab ecosystem by providing an integrated environment to automate experimentation tasks. In particular, our environment supports: (i) scripting and running experiments on virtual labs, and (ii) collecting and analyzing data from the experiments. The current implementation of our environment supports virtual labs created with the authoring tool Easy Java/Javascript Simulations. Since there are public repositories with hundreds of freely available labs created with this tool, the potential applicability of our environment is considerable.
Savoia, Elena; Agboola, Foluso; Biddinger, Paul D
2012-08-01
Many public health and healthcare organizations use formal knowledge management practices to identify and disseminate the experiences gained over time. The "lessons-learned" approach is one such example of knowledge management practice applied to the wider concept of organizational learning. In the field of emergency preparedness, the lessons-learned approach stands on the assumption that learning from experience improves practice and minimizes avoidable deaths and negative economic and social consequences of disasters. In this project, we performed a structured review of after-action reports (AARs) to analyze how lessons learned from the response to real incidents may be used to maximize knowledge management and quality improvement practices such as the design of public health emergency preparedness (PHEP) exercises. We chose as a source of data the "Lessons Learned Information Sharing (LLIS.gov)" system, a joint program of the U.S. Department of Homeland Security (DHS) and FEMA that serves as the national online repository of lessons learned, best practices, and innovative ideas. We identified recurring challenges reported by various state and local public health agencies in the response to different types of incidents. We also strove to identify the limitations of systematic learning that can be achieved due to existing weaknesses in the way AARs are developed.
ArrayWiki: an enabling technology for sharing public microarray data repositories and meta-analyses
Stokes, Todd H; Torrance, JT; Li, Henry; Wang, May D
2008-01-01
Background A survey of microarray databases reveals that most of the repository contents and data models are heterogeneous (i.e., data obtained from different chip manufacturers), and that the repositories provide only basic biological keywords linking to PubMed. As a result, it is difficult to find datasets using research context or analysis parameters information beyond a few keywords. For example, to reduce the "curse-of-dimension" problem in microarray analysis, the number of samples is often increased by merging array data from different datasets. Knowing chip data parameters such as pre-processing steps (e.g., normalization, artefact removal, etc.), and knowing any previous biological validation of the dataset, is essential due to the heterogeneity of the data. However, most microarray repositories do not have meta-data information in the first place, and do not have a mechanism to add or insert this information. Thus, there is a critical need to create "intelligent" microarray repositories that (1) enable update of meta-data with the raw array data, and (2) provide standardized archiving protocols to minimize bias from the raw data sources. Results To address these problems, we have developed a community-maintained system called ArrayWiki that unites disparate meta-data of microarray meta-experiments from multiple primary sources with four key features. First, ArrayWiki provides a user-friendly knowledge management interface in addition to a programmable interface using standards developed by Wikipedia. Second, ArrayWiki includes automated quality control processes (caCORRECT) and novel visualization methods (BioPNG, Gel Plots), which provide extra information about data quality unavailable in other microarray repositories. Third, it provides a user-curation capability through the familiar Wiki interface.
Fourth, ArrayWiki provides users with simple text-based searches across all experiment meta-data, and exposes data to search engine crawlers (Semantic Agents) such as Google to further enhance data discovery. Conclusions Microarray data and meta information in ArrayWiki are distributed and visualized using a novel and compact data storage format, BioPNG. Also, they are open to the research community for curation, modification, and contribution. By making a small investment of time to learn the syntax and structure common to all sites running MediaWiki software, domain scientists and practitioners can all contribute to making better use of microarray technologies in research and medical practice. ArrayWiki is available at . PMID:18541053
2012-02-29
surface and Swiss roll) and real-world data sets (UCI Machine Learning Repository [12] and USPS digit handwriting data). In our experiments, we use...less than µn (say µ = 0.8), we can first use a screening technique to select µn candidate nodes, and then apply BIPS on them for further selection and...identified from node j to node i. So we can say the probability of the existence of this connection is approximately 82%. Given the probability matrix
Analysis of the Navy’s Humanitarian Assistance and Disaster Relief Program Performance
2014-12-01
mortar and wood supports. (1) U.S. Government Response Shortly after the earthquake, the president of Pakistan, President Musharraf, made a formal...complicating coordination efforts. 3. Lessons Learned The USN has created and recently updated an online system for use as a repository of after action...I guess the military could somehow post online a list of projects they are doing and also put up a list of projects they want groups to do. This way
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun
2016-06-01
Multivariate time series (MTS) datasets exist broadly in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic, since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or of different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning model with triplet constraints, which can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
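A minimal sketch of the measure itself (not of the LogDet metric learning that produces it): the Mahalanobis distance serves as the local cost inside a standard DTW dynamic program. The matrix M is assumed to be supplied by the caller; with M equal to the identity this reduces to ordinary Euclidean DTW.

```python
import numpy as np

def mahalanobis_dtw(A, B, M):
    """DTW distance between MTS A (n x d) and B (m x d), with the local
    distance between frame vectors computed under Mahalanobis matrix M."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            local = float(np.sqrt(diff @ M @ diff))  # Mahalanobis local cost
            # classic DTW recursion: insertion, deletion, or match
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two toy 2-variable series of different lengths; M = I gives Euclidean DTW.
A = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
B = np.array([[0.0, 0.0], [2.0, 2.0]])
print(mahalanobis_dtw(A, B, np.eye(2)))
```

In the paper, M would instead be learned from triplet constraints; plugging a learned M into the same recursion is the only change.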
An application of machine learning to the organization of institutional software repositories
NASA Technical Reports Server (NTRS)
Bailin, Sidney; Henderson, Scott; Truszkowski, Walt
1993-01-01
Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that have motivated this work, present the design of the system, and evaluate the results of its application.
Transportation plan repository and archive.
DOT National Transportation Integrated Search
2011-04-01
This project created a repository and archive for transportation planning documents in Texas within the established Texas A&M Repository (http://digital.library.tamu.edu). This transportation planning archive and repository provides ready access ...
Werner, Kent; Bosson, Emma; Berglund, Sten
2006-12-01
Safety assessment related to the siting of a geological repository for spent nuclear fuel deep in the bedrock requires identification of potential flow paths and the associated travel times for radionuclides originating at repository depth. Using the Laxemar candidate site in Sweden as a case study, this paper describes modeling methodology, data integration, and the resulting water flow models, focusing on the Quaternary deposits and the upper 150 m of the bedrock. Example simulations identify flow paths to groundwater discharge areas and flow paths in the surface system. The majority of the simulated groundwater flow paths end up in the main surface waters and along the coastline, even though the particles used to trace the flow paths are introduced with a uniform spatial distribution at a relatively shallow depth. The calculated groundwater travel time, determining the time available for decay and retention of radionuclides, is on average longer to the coastal bays than to other biosphere objects at the site. Further, it is demonstrated how GIS-based modeling can be used to limit the number of surface flow paths that need to be characterized for safety assessment. Based on the results, the paper discusses an approach for coupling the present models to a model for groundwater flow in the deep bedrock.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faybishenko, Boris; Birkholzer, Jens; Sassani, David
The overall objective of the Fifth Worldwide Review (WWR-5) is to document the current state-of-the-art of major developments in a number of nations throughout the world pursuing geological disposal programs, and to summarize challenging problems and experience that have been obtained in siting, preparing and reviewing cases for the operational and long-term safety of proposed and operating nuclear waste repositories. The scope of the Review is to address current specific technical issues and challenges in safety case development along with the interplay of technical feasibility, siting, engineering design issues, and operational and post-closure safety. In particular, the chapters included in the report present the following types of information: the current status of the deep geological repository programs for high-level nuclear waste and low- and intermediate-level nuclear waste in each country, concepts of siting and radioactive waste and spent nuclear fuel management in different countries (with the emphasis on nuclear waste disposal under different climatic conditions and in different geological formations), progress in repository site selection and site characterization, technology development, buffer/backfill materials studies and testing, support activities, programs, and projects, international cooperation, and future plans, as well as regulatory issues and transboundary problems.
Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David
2014-01-01
Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.
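The cross-study comparison such a repository enables reduces, at its simplest, to set operations over downloaded code lists. A toy sketch, with invented codes that merely illustrate the shape of the data:

```python
# Two hypothetical clinical code lists for the "same" condition, as they
# might be downloaded from such a repository (codes are illustrative only,
# not real disease definitions).
study_a = {"C10E.", "C10F.", "C109."}   # codes used by study A
study_b = {"C10F.", "C109.", "C10N."}   # codes used by study B

shared = study_a & study_b              # codes both definitions agree on
only_a = study_a - study_b              # codes unique to study A
only_b = study_b - study_a              # codes unique to study B

# Jaccard overlap: one simple summary of how comparable the definitions are.
overlap = len(shared) / len(study_a | study_b)

print(sorted(shared), sorted(only_a), sorted(only_b), round(overlap, 2))
```

Published machine-readable code lists make exactly this kind of check, and hence replication and validation, possible.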
Supporting multiple domains in a single reuse repository
NASA Technical Reports Server (NTRS)
Eichmann, David A.
1992-01-01
Domain analysis typically results in the construction of a domain-specific repository. Such a repository imposes artificial boundaries on the sharing of similar assets between related domains. A lattice-based approach to repository modeling can preserve a reuser's domain specific view of the repository, while avoiding replication of commonly used assets and supporting a more general perspective on domain interrelationships.
Digital Library Storage using iRODS Data Grids
NASA Astrophysics Data System (ADS)
Hedges, Mark; Blanke, Tobias; Hasan, Adil
Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.
Ubiquitous-Severance Hospital Project: Implementation and Results
Chang, Bung-Chul; Kim, Young-A; Kim, Jee Hea; Jung, Hae Kyung; Kang, Eun Hae; Kang, Hee Suk; Lee, Hyung Il; Kim, Yong Ook; Yoo, Sun Kook; Sunwoo, Ilnam; An, Seo Yong; Jeong, Hye Jeong
2010-01-01
Objectives The purpose of this study was to review the implementation of the u-Severance information system with a focus on electronic hospital records (EHR) and to suggest future improvements. Methods The Clinical Data Repository (CDR) of u-Severance involved implementing electronic medical records (EMR) as the basis of the EHR and the management of individual health records. The EHR were implemented with service enhancements extending to the clinical decision support system (CDSS) and expanding the knowledge base for research with a repository for clinical data and medical care information. Results The EMR system of Yonsei University Health Systems (YUHS) consists of HP Integrity Superdome servers using MS SQL as the database management system and MS Windows as its operating system. Conclusions YUHS is a high-performing medical institution with regard to efficient management and customer satisfaction; however, after 5 years of operation of the u-Severance system, several limitations with regard to expandability and security have been identified. PMID:21818425
M4SF-17LL010301071: Thermodynamic Database Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavarin, M.; Wolery, T. J.
2017-09-05
This progress report (Level 4 Milestone Number M4SF-17LL010301071) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number M4SF-17LL01030107. The DR Argillite Disposal R&D control account is focused on the evaluation of important processes in the analysis of disposal design concepts and related materials for nuclear fuel disposal in clay-bearing repository media. The objectives of this work package are to develop model tools for evaluating impacts of THMC processes on long-term disposal of spent fuel in argillite rocks, and to establish the scientific basis for high thermal limits. This work is contributing to the GDSA model activities to identify gaps, develop process models, provide parameter feeds, and support requirements, providing the capability for a robust repository performance assessment model by 2020.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, Lydia Ilaiza, E-mail: lydiailaiza@gmail.com; Ryong, Kim Tae
The whole decommissioning process of a repository requires the relevant bodies to have a financing system that ensures sufficient funds over its whole life cycle (periods of many decades). Therefore, the financing mechanism and management system must reflect the national position, the institutional and legislative environment, technical capabilities, and the waste origin, ownership, characteristics and inventories. The main objective of this study is to focus on cost considerations, alternative funding managements and mechanisms, and the technical and non-technical factors that may affect repository life-cycle costs. In conclusion, the outcome of this paper is a set of recommendations that could be applied by national planners, the regulatory body, engineers, or managers to form a financial management plan for the decommissioning of a nuclear installation.
A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.
Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos
2016-01-01
Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on splitting and dynamic management of Digital Imaging and Communications in Medicine (DICOM) objects. The proposed solution was successfully deployed in a regional PACS archive. The results obtained proved that it is better than conventional approaches, as it reduces remote access latency and also the required cache storage space.
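The abstract does not specify the cache design, so the sketch below is a guessed illustration of the general idea only: a fragment-level LRU cache in which DICOM objects are split into pieces, so eviction can discard parts of a large study rather than whole objects. All names and sizes are invented.

```python
from collections import OrderedDict

class ImagingCache:
    """Toy fragment-level LRU cache for outsourced imaging objects.
    Hypothetical sketch, not the published mechanism: objects are split
    into fixed-size fragments so eviction works piecewise."""

    def __init__(self, capacity_bytes, fragment_size=4):
        self.capacity = capacity_bytes
        self.fragment_size = fragment_size
        self.n_frags = {}               # sop_uid -> expected fragment count
        self.store = OrderedDict()      # (sop_uid, frag_no) -> bytes

    def _used(self):
        return sum(len(v) for v in self.store.values())

    def put(self, sop_uid, payload):
        frags = [payload[i:i + self.fragment_size]
                 for i in range(0, len(payload), self.fragment_size)]
        self.n_frags[sop_uid] = len(frags)
        for no, frag in enumerate(frags):
            self.store[(sop_uid, no)] = frag
        while self._used() > self.capacity:
            self.store.popitem(last=False)   # evict least recently used fragment

    def get(self, sop_uid):
        keys = [(sop_uid, no) for no in range(self.n_frags.get(sop_uid, 0))]
        if not keys or any(k not in self.store for k in keys):
            return None                      # cache miss: would fetch from cloud
        for k in keys:
            self.store.move_to_end(k)        # refresh recency
        return b"".join(self.store[k] for k in keys)

cache = ImagingCache(capacity_bytes=12)
cache.put("1.2.840.1", b"CTIMAGEDATA")
print(cache.get("1.2.840.1"))                # full object reassembled
cache.put("1.2.840.2", b"MRDATA")            # forces eviction of older fragments
print(cache.get("1.2.840.1"))                # now incomplete -> None
```

The design point being illustrated: splitting lets the cache reclaim space in small increments, which matters when individual imaging objects are large relative to cache capacity.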
JavaScript Access to DICOM Network and Objects in Web Browser.
Drnasin, Ivan; Grgić, Mislav; Gogić, Goran
2017-10-01
The Digital Imaging and Communications in Medicine (DICOM) 3.0 standard provides the baseline for picture archiving and communication systems (PACS). The development of the Internet and various communication media initiated demand for non-DICOM access to PACS systems. The ever-increasing utilization of web browsers, laptops, and handheld devices, as opposed to desktop applications and static organizational computers, led to the development of different web technologies, which the DICOM standard officials subsequently accepted as tools of alternative access. This paper provides an overview of the current state of development of web access technology to DICOM repositories. It presents a different approach of using HTML5 features of web browsers through the JavaScript language and the WebSocket protocol, enabling real-time communication with DICOM repositories. A JavaScript DICOM network library, a DICOM-to-WebSocket proxy, and a proof-of-concept web application that qualifies as a DICOM 3.0 device were developed.
The development of digital library system for drug research information.
Kim, H J; Kim, S R; Yoo, D S; Lee, S H; Suh, O K; Cho, J H; Shin, H T; Yoon, J P
1998-01-01
The sophistication of computer technology and information transmission on the internet has made various cyber information repositories available to information consumers. In the era of the information superhighway, the digital library, which can be accessed from remote sites at any time, is considered the prototype of the information repository. Using an object-oriented DBMS, the very first model of a digital library for pharmaceutical researchers and related professionals in Korea has been developed. Published research papers and researchers' personal information were included in the database. For the database of research papers, 13 domestic journals were abstracted and scanned into full-text image files which can be viewed by web browsers. The database of researchers' personal information was also developed and interlinked to the database of research papers. These databases will be continuously updated and will be combined with worldwide information as the unique digital library in the field of pharmacy.
GENESI-DR - A single access point to Earth Science data
NASA Astrophysics Data System (ADS)
Cossu, R.; Goncalves, P.; Pacini, F.
2009-04-01
The amount of information being generated about our planet is increasing at an exponential rate, but it must be easily accessible in order to apply it to the global needs relating to the state of the Earth. Currently, information about the state of the Earth, relevant services, analysis results, applications and tools are accessible only in a scattered and uncoordinated way, often through individual initiatives from Earth Observation mission operators, scientific institutes dealing with ground measurements, service companies, data catalogues, etc. A dedicated infrastructure providing transparent access to all these resources will support Earth Science communities by allowing them to easily and quickly derive objective information and share knowledge across all environmentally sensitive domains. The use of high-speed networks (GÉANT) and experimentation with new technologies, such as BitTorrent, will also contribute to better services for the Earth Science communities. GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories), an ESA-led, European Commission (EC)-funded two-year project, is taking the lead in providing reliable, easy, long-term access to Earth Science data via the Internet. The project will allow scientists from different Earth Science disciplines located across Europe to locate, access, combine and integrate historical and fresh Earth-related data from space, airborne and in-situ sensors archived in large distributed repositories. GENESI-DR federates a collection of heterogeneous digital Earth Science repositories into this dedicated infrastructure. The federated digital repositories, seen as service and data providers, will share access to their resources (catalogue functions, data access, processing services, etc.)
and will adhere to a common set of standards, policies and interfaces. End users will be provided with a virtual collection of digital Earth Science data, irrespective of its location among the various federated repositories. The GENESI-DR objectives have led to the identification of the basic infrastructure requirements: • the capability, for Earth Science users, to discover data from different European Earth Science Digital Repositories through the same interface in a transparent and homogeneous way; • easy and fast access to large volumes of coherently maintained, distributed data; • the capability, for DR owners, to make their data available to a significantly increased audience with no need to duplicate it in a different storage system. Data discovery is based on a Central Discovery Service, which allows users and applications to query information about data collections and products held in heterogeneous catalogues at federated DR sites. This service can be accessed by users via a web interface, the GENESI-DR Web Portal, or by external applications via open standardized interfaces exposed by the system. The Central Discovery Service identifies the DRs providing products that comply with the user's search criteria and returns the corresponding access points to the requester. By drawing on efficient data transfer technologies such as HTTPS, GridFTP and BitTorrent, the infrastructure provides fast and easy access. For data publishing, GENESI-DR provides several mechanisms to assist DR owners in producing metadata catalogues. To reach its objectives, the GENESI-DR e-Infrastructure will be validated against user needs for accessing and sharing Earth Science data.
Initially, four specific applications in the land, atmosphere and marine domains have been selected: • near-real-time orthorectification for agricultural crop monitoring; • urban area mapping in support of emergency response; • data assimilation in GlobModel, addressing major environmental and health issues in Europe, with a particular focus on air quality; • SeaDataNet, to aid environmental assessments and to forecast the physical state of the oceans in near real time. Other applications will complement these during the second half of the project. GENESI-DR also aims to develop common approaches to preserving the historical archives, and the ability to access the derived user information, as both software and hardware transformations occur. Ensuring access to Earth Science data for future generations is of the utmost importance because it allows for the continuity of knowledge generation. For instance, scientists accessing today's climate change data in 50 years will be able to better understand and detect trends in global warming and apply this knowledge to ongoing natural phenomena. GENESI-DR will work towards harmonising operations and applying approved standards, policies and interfaces at key Earth Science data repositories. To help with this undertaking, GENESI-DR will establish links with relevant organisations and programmes such as space agencies, institutional environmental programmes, international Earth Science programmes and standardisation bodies.
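The Central Discovery Service pattern described above (one query interface fanning out over heterogeneous federated catalogues and returning access points) can be sketched in a few lines. This is a minimal, stdlib-only illustration; the repository names, record fields, and matching semantics are invented for the example and are not the actual GENESI-DR interfaces.

```python
def search_repository(catalogue, criteria):
    """Return access points for records matching every search criterion."""
    return [
        record["access_point"]
        for record in catalogue
        if all(record.get(key) == value for key, value in criteria.items())
    ]

def federated_search(repositories, criteria):
    """Query every federated repository through the same interface and
    merge the matching access points, keyed by repository name."""
    results = {}
    for name, catalogue in repositories.items():
        hits = search_repository(catalogue, criteria)
        if hits:
            results[name] = hits
    return results

# Toy federated catalogues; real DR sites would expose these remotely.
repositories = {
    "esa-eo": [
        {"domain": "land", "product": "orthorectified",
         "access_point": "https://example.org/esa/123"},
    ],
    "seadatanet": [
        {"domain": "marine", "product": "forecast",
         "access_point": "https://example.org/sdn/456"},
    ],
}

print(federated_search(repositories, {"domain": "marine"}))
```

The caller sees one homogeneous interface; only the service needs to know which repositories hold matching products.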
Albar, Juan Pablo; Binz, Pierre-Alain; Eisenacher, Martin; Jones, Andrew R; Mayer, Gerhard; Omenn, Gilbert S; Orchard, Sandra; Vizcaíno, Juan Antonio; Hermjakob, Henning
2015-01-01
Objective To describe the goals of the Proteomics Standards Initiative (PSI) of the Human Proteome Organization, the methods that the PSI has employed to create data standards, the resulting output of the PSI, lessons learned from the PSI's evolution, and future directions and synergies for the group. Materials and Methods The PSI has 5 categories of deliverables that have guided the group. These are minimum information guidelines, data formats, controlled vocabularies, resources and software tools, and dissemination activities. These deliverables are produced via the leadership and working group organization of the initiative, driven by frequent workshops and ongoing communication within the working groups. Official standards are subjected to a rigorous document process that includes several levels of peer review prior to release. Results We have produced and published minimum information guidelines describing what information should be provided when making data public, either via public repositories or other means. The PSI has produced a series of standard formats covering mass spectrometer input, mass spectrometer output, results of informatics analysis (both qualitative and quantitative analyses), reports of molecular interaction data, and gel electrophoresis analyses. We have produced controlled vocabularies that ensure that concepts are uniformly annotated in the formats, and we have engaged in extensive software development and dissemination efforts so that the standards can be used efficiently by the community. Conclusion In its first dozen years of operation, the PSI has produced many standards that have accelerated the field of proteomics by facilitating data exchange and deposition to data repositories. We look to the future to continue developing standards for new proteomics technologies and workflows, and mechanisms for integration with other omics data types.
Our products facilitate the translation of genomics and proteomics findings to clinical and biological phenotypes. The PSI website can be accessed at http://www.psidev.info. PMID:25726569
Code of Federal Regulations, 2013 CFR
2013-01-01
... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...
Code of Federal Regulations, 2012 CFR
2012-01-01
... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...
Code of Federal Regulations, 2014 CFR
2014-01-01
... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...
Code of Federal Regulations, 2011 CFR
2011-01-01
... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...
Bomb Me: Trans/acting Subject into Object, an Installation for R.I. Simon and Angela Failler
ERIC Educational Resources Information Center
Saklikar, Renée Sarojini
2014-01-01
This installation is one of a series made and being made while the author writes a life-long poem chronicle, "thecanadaproject." The photographs were taken as the author sifted through her personal archive--a collection that is at once intimate and filled with fragments from a public repository: that of the bombing of an airplane in…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-13
... isolation in a deep geologic repository for spent fuel or high-level radioactive waste; (2) has had highly... in 10 CFR Part 61, Subpart C and pursuant to a State approved closure plan or State-issued permit; or... with the performance objectives of 10 CFR Part 61, Subpart C; pursuant to a State approved closure plan...
Simulator sickness research program at NASA-Ames Research Center
NASA Technical Reports Server (NTRS)
Mccauley, Michael E.; Cook, Anthony M.
1987-01-01
The simulator sickness syndrome is receiving increased attention in the simulation community. NASA-Ames Research Center has initiated a program to facilitate the exchange of information on this topic among the tri-services and other interested government organizations. The program objectives are to identify priority research issues, promote efficient research strategies, serve as a repository of information, and disseminate information to simulator users.
Can YouTube enhance student nurse learning?
Clifton, Andrew; Mann, Claire
2011-05-01
The delivery of nurse education has changed radically in the past two decades. Increasingly, nurse educators are using new technology in the classroom to enhance their teaching and learning. One recent technological development is the user-generated content website YouTube. Originally, YouTube was used as a repository for sharing home-made videos; more recently, online content has also been generated by political parties, businesses and educationalists. We recently delivered a module to undergraduate student nurses in which the teaching and learning drew heavily on YouTube resources. We found that the use of YouTube videos increased student engagement and critical awareness and facilitated deep learning. Furthermore, these videos could be accessed at any time of day and from a place to suit the student. We acknowledge that there are some constraints to using YouTube for teaching and learning, particularly around unregulated content, which is often misleading, inaccurate or biased. However, we strongly urge nurse educators to consider using YouTube for teaching and learning, in and outside the classroom, for a generation of students who are natives of a rapidly changing digital world. Copyright © 2010 Elsevier Ltd. All rights reserved.
Monitored Geologic Repository Project Description Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. M. Curry
2001-01-30
The primary objective of the Monitored Geologic Repository Project Description Document (PDD) is to allocate the functions, requirements, and assumptions to the systems at Level 5 of the Civilian Radioactive Waste Management System (CRWMS) architecture identified in Section 4. It provides traceability of the requirements to those contained in Section 3 of the ''Monitored Geologic Repository Requirements Document'' (MGR RD) (YMP 2000a) and other higher-level requirements documents. In addition, the PDD allocates design-related assumptions to work products of non-design organizations. The document provides Monitored Geologic Repository (MGR) technical requirements in support of design and performance assessment in preparing for the Site Recommendation (SR) and License Application (LA) milestones. The technical requirements documented in the PDD are to be captured in the System Description Documents (SDDs), which address each of the systems at Level 5 of the CRWMS architecture. The design engineers obtain the technical requirements from the SDDs and by reference from the SDDs to the PDD. The design organizations and other organizations will obtain design-related assumptions directly from the PDD. These organizations may establish additional assumptions for their individual activities, but such assumptions are not to conflict with the assumptions in the PDD. The PDD will serve as the primary link between the technical requirements captured in the SDDs and the design requirements captured in US Department of Energy (DOE) documents. The approved PDD is placed under Level 3 baseline control by the CRWMS Management and Operating Contractor (M&O), and the following portions of the PDD constitute the Technical Design Baseline for the MGR: the design characteristics listed in Table 1-1, the MGR Architecture (Section 4.1), the Technical Requirements (Section 5), and the Controlled Project Assumptions (Section 6).
XDS in healthcare: Could it lead to a duplication problem? Field study from GVR Sweden
NASA Astrophysics Data System (ADS)
Wintell, M.; Lundberg, N.; Lindsköld, L.
2011-03-01
Managing different registries and repositories within healthcare regions increases the risk of holding almost the same information with different statuses and different content. This is because medical information is created in a dynamic process: its content changes over its lifetime within the "active" healthcare phase. The information needs to be easily accessible, forming a transparent platform for medical decision making. In the Region Västra Götaland (VGR), Sweden, data from 29 X-ray departments with different Picture Archive and Communication Systems (PACS) and Radiology Information Systems (RIS) are shared through the Infobroker solution, which acts as a broker between the actors involved. Requests/reports from RIS are stored as Digital Imaging and Communications in Medicine (DICOM) Structured Report (SR) objects, together with the images. Every status change within these activities is updated in the Information Infrastructure, which follows the Integrating the Healthcare Enterprise (IHE) mission. In Cross-enterprise Document Sharing for Imaging (XDS-I), a registry and a central repository are the components used for sharing medical documentation. The VGR strategy was not to deploy one regional XDS-I registry and repository; instead, VGR applied an Enterprise Architecture (EA) intertwined with the Information Infrastructure for dynamic delivery to consumers. The upcoming use of different regional XDS registries and repositories could lead to new ways of carrying out shared work, but it can also lead to problems: XDS and XDS-I implemented without a strategy could increase the number of statuses/versions and duplicate information in the Information Infrastructure.
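The duplication risk described above is, at its core, a reconciliation problem: the same document registered in several registries with different statuses or versions. As a hypothetical sketch (the field names and versioning scheme are assumptions, not part of the XDS specification), a consumer can collapse duplicates by keeping only the highest-version entry per unique document ID:

```python
def deduplicate(entries):
    """Keep, for each unique document ID, only the entry with the
    highest version, so consumers see a single authoritative status."""
    latest = {}
    for entry in entries:
        uid = entry["unique_id"]
        if uid not in latest or entry["version"] > latest[uid]["version"]:
            latest[uid] = entry
    # Sort for a stable, readable result.
    return sorted(latest.values(), key=lambda e: e["unique_id"])

# Toy metadata entries as they might appear across two registries.
entries = [
    {"unique_id": "doc-1", "version": 1, "status": "preliminary"},
    {"unique_id": "doc-1", "version": 2, "status": "final"},
    {"unique_id": "doc-2", "version": 1, "status": "final"},
]

print(deduplicate(entries))
```

A region-wide strategy would push this reconciliation into the infrastructure itself rather than leaving it to each consumer.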
NASA Astrophysics Data System (ADS)
Arko, Robert; Chandler, Cynthia; Stocks, Karen; Smith, Shawn; Clark, Paul; Shepherd, Adam; Moore, Carla; Beaulieu, Stace
2013-04-01
The Rolling Deck to Repository (R2R) program is developing infrastructure to ensure the underway sensor data from U.S. academic oceanographic research vessels are routinely and consistently documented, preserved in long-term archives, and disseminated to the science community. The entire R2R Catalog is published online as a Linked Data collection, making it easily accessible to encourage discovery and integration with data at other repositories. We are developing the R2R Linked Data collection with specific goals in mind: 1.) We facilitate data access and reuse by publishing the richest possible collection of resources to describe vessels, cruises, instruments, and datasets from the U.S. academic fleet, including data quality assessment results and clean trackline navigation; 2.) We facilitate data citation through the entire lifecycle from field acquisition to shoreside archiving to journal articles and global syntheses, by publishing Digital Object Identifiers (DOIs) for datasets and encoding them directly into our Linked Data resources; and 3.) We facilitate federation with other repositories such as the Biological and Chemical Oceanography Data Management Office (BCO-DMO), InterRidge Vents Database, and Index to Marine and Lacustrine Geological Samples (IMLGS), by reciprocal linking between RDF resources and supporting the RDF Query Language. R2R participates in the Ocean Data Interoperability Platform (ODIP), a joint European-U.S.-Australian partnership to facilitate the sharing of data and documentation across international borders. We publish our controlled vocabularies as a Simple Knowledge Organization System (SKOS) concept collection, and are working toward alignment with SeaDataNet and other community-standard terms using the NERC Vocabulary Server (NVS). http://rvdata.us/
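As a rough illustration of the Linked Data publishing idea described above, the snippet below serializes a couple of catalogue statements as N-Triples using only the standard library. The cruise URI, predicate URIs, and DOI are invented placeholders, not actual R2R identifiers.

```python
def to_ntriples(triples):
    """Serialize (subject, predicate, object) triples as N-Triples lines.
    Objects beginning with 'http' are treated as URIs, others as literals."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

# Hypothetical cruise record: a DOI literal and a link to a vessel resource.
triples = [
    ("http://example.org/cruise/AT26-13",
     "http://purl.org/dc/terms/identifier",
     "doi:10.0000/example"),
    ("http://example.org/cruise/AT26-13",
     "http://example.org/vocab/vessel",
     "http://example.org/vessel/atlantis"),
]

print(to_ntriples(triples))
```

Publishing such triples (and reciprocally linking to resources at partner repositories) is what makes federation and RDF querying across catalogues possible.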
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, J.
1993-12-31
The Swedish Nuclear Power Inspectorate (SKI) regulatory research program has to prepare for the process of licensing a repository for spent nuclear fuel by building up the necessary knowledge and review capacity. SKI's main strategy for meeting this demand is to develop an independent performance assessment capability. SKI's first performance assessment project, Project-90, was completed in 1991 and is now followed by a new project, SITE-94. SITE-94 is based on conclusions reached within Project-90. An independent review of Project-90, carried out by a NEA team of experts, has also contributed to the formation of the project. Another important reason for the project is that the implementing organization in Sweden, SKB, has proposed to submit an application to start detailed investigation of a repository candidate site around 1997. SITE-94 is a performance assessment of a hypothetical repository at a real site. The main objective of the project is to determine how site-specific data should be assimilated into the performance assessment process, and to evaluate how uncertainties inherent in site characterization will influence performance assessment results. This will be addressed by exploring multiple interpretations, conceptual models, and parameters consistent with the site data. The site evaluation will strive for consistency between geological, hydrological, rock mechanical, and geochemical descriptions. Other important elements of SITE-94 are the development of a practical and defensible methodology for defining, constructing and analyzing scenarios, the development of approaches for the treatment of uncertainties, the evaluation of canister integrity, and the development and application of an appropriate quality assurance plan for performance assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsberg, C.; Miller, W.F.
2013-07-01
The historical repository siting strategy in the United States has been a top-down approach driven by federal government decision making, but it has been a failure. This policy has led to fuel cycle facilities being dispersed among different states. The U.S. government is now considering an alternative repository siting strategy based on voluntary agreements with state governments. If that occurs, state governments become key decision makers, and they have different priorities. Those priorities may change the characteristics of the repository and the fuel cycle. State government priorities, when considering hosting a repository, are safety, financial incentives, and jobs. It follows that states will demand that a repository be the center of the back end of the fuel cycle as a condition of hosting it. For example, states will push for collocation of transportation services, safeguards training, and navy/private SNF (Spent Nuclear Fuel) inspection at the repository site. Such activities would more than double local employment relative to what was planned for the Yucca Mountain-type repository. States may demand (1) the right to take future title to the SNF, so that if recycling became economic the reprocessing plant would be built at the repository site, and (2) the right to a certain fraction of the repository capacity for foreign SNF. That would open the future option of leasing fuel to foreign utilities with disposal of the SNF in the repository, but with the state-government condition that the front-end fuel-cycle enrichment and fuel fabrication facilities be located in that state.
Intelligent Discovery for Learning Objects Using Semantic Web Technologies
ERIC Educational Resources Information Center
Hsu, I-Ching
2012-01-01
The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…
NCI Mouse Repository | Frederick National Laboratory for Cancer Research
The NCI Mouse Repository is an NCI-funded resource for mouse cancer models and associated strains. The repository makes strains available to all members of the scientific community (academic, non-profit, and commercial). NCI Mouse Repository strains
NASA Electronic Library System (NELS): The system impact of security
NASA Technical Reports Server (NTRS)
Mcgregor, Terry L.
1993-01-01
This paper discusses security issues as they relate to the NASA Electronic Library System (NELS), which is currently in use as the repository system for AdaNET System Version 3 (ASV3), operated by MountainNET, Inc. NELS was originally designed to provide for public, development, and secure collections and objects. The secure feature for collections and objects was deferred in the initial system for implementation at a later date. The NELS system is now nine months old and many lessons have been learned about the use and maintenance of library systems. MountainNET has nine months of experience operating the system and gathering feedback from the ASV3 user community. The user community has expressed an interest in seeing security features implemented in the current system, so the time has come to take another look at the whole issue of security for NELS. Two requirements involving security have been put forth by MountainNET for the ASV3 system. The first is to incorporate, at the collection level, a security scheme to allow restricted access to collections; this should be invisible to end users and be controlled by librarians. The second is to allow the inclusion of applications that can be executed only by a controlled group of users, for example, an application that can be executed by librarians only. These requirements provide a broad framework in which to work, and they raise more questions than answers. To explore their impact, a top-down approach will be used.
Quantification of Cation Sorption to Engineered Barrier Materials Under Extreme Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Brian; Schlautman, Mark; Rao, Linfeng
The objective of this research is to examine mechanisms and thermodynamics of actinide sorption to engineered barrier materials (iron (oxyhydr)oxides and bentonite clay) for nuclear waste repositories under high temperature and high ionic strength conditions using a suite of macroscopic and microscopic techniques which will be coupled with interfacial reaction models. Gaining a mechanistic understanding of interfacial processes governing the sorption/sequestration of actinides at mineral-water interfaces is fundamental for the accurate prediction of actinide behavior in waste repositories. Although macroscale sorption data and various spectroscopic techniques have provided valuable information regarding speciation of actinides at solid-water interfaces, significant knowledge gaps still exist with respect to sorption mechanisms and the ability to quantify sorption, particularly at high temperatures and ionic strengths. This objective is addressed through three major tasks: (1) influence of oxidation state on actinide sorption to iron oxides and clay minerals at elevated temperatures and ionic strengths; (2) calorimetric titrations of actinide-mineral suspensions; (3) evaluation of bentonite performance under repository conditions. The results of the work will include a qualitative conceptual model and a quantitative thermodynamic speciation model describing actinide partitioning to minerals and sediments, which is based upon a mechanistic understanding of specific sorption processes as determined from both microscale and macroscale experimental techniques. The speciation model will be a thermodynamic aqueous and surface complexation model of actinide interactions with mineral surfaces that is self-consistent with macroscopic batch sorption data, calorimetric and potentiometric titrations, X-ray Absorption Spectroscopy (XAS, mainly Extended X-ray Absorption Fine Structure (EXAFS)), and electron microscopy analyses.
The novelty of the proposed work lies largely in the unique system conditions which will be examined (i.e., elevated temperature and ionic strength) and the manner in which the surface complexation model will be developed in terms of specific surface species identified using XAS. These experiments will thus provide a fundamental understanding of the chemical and physical processes occurring at the solid-solution interface under expected repository conditions. Additionally, the focus on the thermodynamic treatment of actinide ion interactions with minerals will provide information on the driving forces involved and contribute to the overall understanding of the high affinity many actinide ions have for oxide surfaces. The utility of this model will be demonstrated through a series of advective and diffusive flow experiments.
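The macroscopic sorption data discussed above are commonly summarized with simple isotherm and temperature-dependence relations. As a generic illustration only (not the surface complexation model proposed in this work, and with invented parameter values), a Langmuir isotherm combined with a van 't Hoff correction shows how an equilibrium sorption constant might be shifted to repository-relevant temperatures, assuming a constant reaction enthalpy:

```python
import math

def langmuir(c, q_max, k):
    """Langmuir isotherm: sorbed amount q as a function of aqueous
    concentration c, with capacity q_max and equilibrium constant k."""
    return q_max * k * c / (1.0 + k * c)

def vant_hoff(k_ref, t_ref, t, delta_h):
    """Shift an equilibrium constant from reference temperature t_ref (K)
    to t (K) via the van 't Hoff relation, assuming constant enthalpy
    delta_h (J/mol): ln(K/K_ref) = -(delta_h/R) * (1/t - 1/t_ref)."""
    R = 8.314  # gas constant, J/(mol*K)
    return k_ref * math.exp(-delta_h / R * (1.0 / t - 1.0 / t_ref))

# Illustrative numbers only: an endothermic sorption reaction sorbs more
# strongly at elevated (repository-like) temperature.
k_25C = 100.0
k_75C = vant_hoff(k_25C, 298.15, 348.15, 20000.0)
print(k_75C > k_25C)
print(langmuir(0.001, 2.0, k_75C))
```

Calorimetric titrations of the kind proposed above are one way to constrain the enthalpy term that such a temperature correction assumes.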
ERIC Educational Resources Information Center
Kay, Robin H.; Knaack, Liesel
2009-01-01
Learning objects are interactive web-based tools that support the learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of learners. Research on the impact, effectiveness, and usefulness of learning objects is limited, partially because comprehensive, theoretically based, reliable, and valid evaluation…
Liberating Learning Object Design from the Learning Style of Student Instructional Designers
ERIC Educational Resources Information Center
Akpinar, Yavuz
2007-01-01
Learning objects are a new form of learning resource, and the design of these digital environments has many facets. To investigate senior instructional design students' use of reflection tools in designing learning objects, a series of studies was conducted using the Reflective Action Instructional Design and Learning Object Review Instrument…
Learning Objects and Gerontology
ERIC Educational Resources Information Center
Weinreich, Donna M.; Tompkins, Catherine J.
2006-01-01
Virtual AGE (vAGE) is an asynchronous educational environment that utilizes learning objects focused on gerontology and a learning anytime/anywhere philosophy. This paper discusses the benefits of asynchronous instruction and the process of creating learning objects. Learning objects are "small, reusable chunks of instructional media" Wiley…
Data repositories for medical education research: issues and recommendations.
Schwartz, Alan; Pappas, Cleo; Sandlow, Leslie J
2010-05-01
The authors explore issues surrounding digital repositories with the twofold intention of clarifying their creation, structure, content, and use, and considering the implementation of a global digital repository for medical education research data sets: an online site where medical education researchers would be encouraged to deposit their data in order to facilitate its reuse and reanalysis by other researchers. By motivating data sharing and reuse, investigators, medical schools, and other stakeholders might see substantial benefits to their own endeavors and to the progress of the field of medical education. The authors review digital repositories in medicine, the social sciences, and education; describe the contents and scope of repositories; and present extant examples. The authors describe the potential benefits of a medical education data repository and report the results of a survey of the Society of Directors of Research in Medical Education, in which participants responded to questions about data sharing and a potential data repository. Respondents strongly endorsed data sharing, with the caveat that principal investigators should choose whether or not to share the data they collect. A large majority believed that a repository would benefit their unit and the field of medical education. Few reported using existing repositories. Finally, the authors consider challenges to the establishment of such a repository, including taxonomic organization, intellectual property concerns, human subjects protection, technological infrastructure, and evaluation standards. The authors conclude with recommendations for how a medical education data repository could be successfully developed.
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2010 CFR
2010-10-01
... repositories. 227.7207 Section 227.7207 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to...
75 FR 70310 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... Consumer Protection Act governing the security-based swap data repository registration process, the duties of such repositories, and the core principles applicable to such repositories. 4. The Commission will... security-based swap data repositories or the Commission and the public dissemination of security-based swap...
Detecting unresolved binary stars in Euclid VIS images
NASA Astrophysics Data System (ADS)
Kuntzer, T.; Courbin, F.
2017-10-01
Measuring a weak gravitational lensing signal to the level required by the next generation of space-based surveys demands exquisite reconstruction of the point-spread function (PSF). However, unresolved binary stars can significantly distort the PSF shape. In an effort to mitigate this bias, we aim to detect unresolved binaries in realistic Euclid stellar populations. We tested methods in numerical experiments where (I) the PSF shape is known to Euclid requirements across the field of view, and (II) the PSF shape is unknown. We drew simulated catalogues of PSF shapes for this proof-of-concept paper. Following the Euclid survey plan, the objects were observed four times. We propose three methods to detect unresolved binary stars. The detection is based on the systematic and correlated biases between exposures of the same object. One method is a simple correlation analysis, while the two others use supervised machine-learning algorithms (a random forest and an artificial neural network). In both experiments, we demonstrate the ability of our methods to detect unresolved binary stars in simulated catalogues. The performance depends on the level of prior knowledge of the PSF shape and on the shape measurement errors. Good detection performance is observed in both experiments. Full complexity, in terms of the images and the survey design, is not included, but key aspects of a more mature pipeline are discussed. Finding unresolved binaries among objects used for PSF reconstruction increases the quality of the PSF determination at arbitrary positions. We show, using different approaches, that we are able to detect at least the binary stars that are most damaging for the PSF reconstruction process. The code corresponding to the algorithms used in this work and all scripts to reproduce the results are publicly available from a GitHub repository accessible via http://lastro.epfl.ch/software
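The core idea above (a true binary biases the shape measurement the same way in every exposure, while a single star produces uncorrelated noise) can be caricatured with a simple threshold test on the mean residual across the four exposures. The residual values, noise level, and threshold below are invented for illustration and do not reproduce the paper's methods:

```python
import statistics

def flag_binaries(residuals_by_star, noise=0.01, threshold=3.0):
    """Flag a star as a binary candidate when the mean shape residual
    across its exposures exceeds threshold * noise / sqrt(n), i.e. when
    the residuals look like a systematic, correlated bias rather than
    per-exposure measurement noise."""
    flagged = []
    for star, residuals in residuals_by_star.items():
        n = len(residuals)
        mean = statistics.fmean(residuals)
        if abs(mean) > threshold * noise / n ** 0.5:
            flagged.append(star)
    return flagged

# Four exposures per object, as in the survey plan; values are invented.
residuals = {
    "single": [0.004, -0.006, 0.002, -0.003],  # noise-like, averages out
    "binary": [0.031, 0.028, 0.035, 0.030],    # systematic, correlated bias
}

print(flag_binaries(residuals))
```

Real pipelines would work with two-component ellipticities and calibrated error models, but the statistical contrast exploited is the same.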
Core Certification of Data Repositories: Trustworthiness and Long-Term Stewardship
NASA Astrophysics Data System (ADS)
de Sherbinin, A. M.; Mokrane, M.; Hugo, W.; Sorvari, S.; Harrison, S.
2017-12-01
Scientific integrity and norms dictate that data created and used by scientists should be managed, curated, and archived in trustworthy data repositories, thus ensuring that science is verifiable and reproducible while preserving the initial investment in collecting data. Research stakeholders, including researchers, science funders, librarians, and publishers, must also be able to establish the trustworthiness of the data repositories they use, to confirm that the data they submit and use remain useful and meaningful in the long term. Data repositories are increasingly recognized as a key element of the global research infrastructure, and establishing their trustworthiness is recognized as a prerequisite for efficient scientific research and data sharing. The Core Trustworthy Data Repository Requirements are a set of universal requirements for certification of data repositories at the core level (see: https://goo.gl/PYsygW). They were developed by the ICSU World Data System (WDS: www.icsu-wds.org) and the Data Seal of Approval (DSA: www.datasealofapproval.org), the two authoritative organizations responsible for the development and implementation of this standard, which will be further developed under the CoreTrustSeal brand. CoreTrustSeal certification of data repositories involves a minimally intensive process whereby repositories supply evidence that they are sustainable and trustworthy. Repositories conduct a self-assessment, which is then reviewed by community peers. Based on this review, CoreTrustSeal certification is granted by the CoreTrustSeal Standards and Certification Board. Certification helps data communities (producers, repositories, and consumers) to improve the quality and transparency of their processes, and to increase awareness of and compliance with established standards.
This presentation will introduce the CoreTrustSeal certification requirements for repositories and offer an opportunity to discuss ways to improve the contribution of certified data repositories to sustain open data for open scientific research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valvoda, Z.; Holub, J.; Kucerka, M.
1996-12-31
In 1993, the Program of Development of the Spent Fuel and High Level Waste Repository in the Conditions of the Czech Republic began. During the first phase, the basic concept and structure of the Program were developed, and the basic design criteria and requirements were prepared. Under the conditions of the Czech Republic, only an underground repository in a deep geological formation is acceptable. The expected depth is between 500 and 1000 meters, and the host rock will be granite. A preliminary variant design study was completed in 1994; it analyzed the radioactive waste and spent fuel flow from NPPs to the repository and various possibilities of transportation according to the various concepts of spent fuel conditioning and transportation to the underground structures. Conditioning and encapsulation of spent fuel and/or radioactive waste is proposed on the repository site. Underground disposal structures are proposed on one underground level. The repository will have reserve capacity for radioactive waste from NPP decommissioning and for waste not acceptable in other repositories. Vertical disposal of unshielded canisters in boreholes and/or horizontal disposal of shielded canisters is being studied. The year 2035 has been established as the target date for the start of repository operation, and a preliminary time schedule of the Project has been developed from this date. A method of calculating levelized and discounted costs over the repository lifetime, for each of five selected variants, was used for the economic calculations. Preliminary expected parametric costs of the repository are about 0.1 Kc ($0.004) per MWh produced in the Czech NPPs. In 1995, the design and feasibility study went into more detail on the technical concept of repository construction and proposed technologies, as well as on the operational phase of the repository.
The paper describes the results of the 1995 design work and presents the program of repository development for the next period.
17 CFR 49.26 - Disclosure requirements of swap data repositories.
Code of Federal Regulations, 2014 CFR
2014-04-01
... data repository's policies and procedures reasonably designed to protect the privacy of any and all... swap data repositories. 49.26 Section 49.26 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION (CONTINUED) SWAP DATA REPOSITORIES § 49.26 Disclosure requirements of swap data...
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
Making research data repositories visible: the re3data.org Registry.
Pampel, Heinz; Vierkant, Paul; Scholze, Frank; Bertelmann, Roland; Kindling, Maxi; Klump, Jens; Goebelbecker, Hans-Jürgen; Gundlach, Jens; Schirmbacher, Peter; Dierolf, Uwe
2013-01-01
Researchers require infrastructures that ensure a maximum of accessibility, stability, and reliability to facilitate working with and sharing of research data. Such infrastructures are increasingly summarized under the term Research Data Repositories (RDR). The project re3data.org (Registry of Research Data Repositories) began indexing research data repositories in 2012 and offers researchers, funding organizations, libraries, and publishers an overview of the heterogeneous research data repository landscape. As of July 2013, re3data.org lists 400 research data repositories and counting; 288 of these are described in detail using the re3data.org vocabulary. Information icons help researchers easily identify an adequate repository for the storage and reuse of their data. This article describes the heterogeneous RDR landscape and presents a typology of institutional, disciplinary, multidisciplinary, and project-specific RDR. Further, the article outlines the features of re3data.org and shows how this registry helps to identify appropriate repositories for storage and search of research data.
PGP repository: a plant phenomics and genomics data publication infrastructure.
Arend, Daniel; Junker, Astrid; Scholz, Uwe; Schüler, Danuta; Wylie, Juliane; Lange, Matthias
2016-01-01
Plant genomics and phenomics represent the most promising tools for accelerating yield gains and overcoming emerging crop productivity bottlenecks. However, accessing this wealth of plant diversity requires the characterization of the material using state-of-the-art genomic, phenomic, and molecular technologies and the release of subsequent research data via a long-term stable, open-access portal. Although several international consortia and public resource centres offer services for plant research data management, valuable digital assets remain unpublished and thus inaccessible to the scientific community. Recently, the Leibniz Institute of Plant Genetics and Crop Plant Research and the German Plant Phenotyping Network jointly initiated the Plant Genomics and Phenomics Research Data Repository (PGP) as an infrastructure to comprehensively publish plant research data. This covers in particular cross-domain datasets that are not published in central repositories because of their volume or unsupported data scope, such as image collections from plant phenotyping and microscopy, unfinished genomes, genotyping data, visualizations of morphological plant models, and data from mass spectrometry, as well as software and documents. The repository is hosted at the Leibniz Institute of Plant Genetics and Crop Plant Research, using e!DAL as software infrastructure and a hierarchical storage management system as the data archival backend. A newly developed data submission tool was made available to the consortium; it features a high level of automation to lower the barriers of data publication. After an internal review process, data are published with citable digital object identifiers, and a core set of technical metadata is registered at DataCite. The e!DAL-embedded Web frontend generates a landing page for each dataset and supports interactive exploration.
PGP is registered as a research data repository at BioSharing.org, re3data.org, and OpenAIRE as a valid EU Horizon 2020 open data archive. These features, together with the programmatic interface and the support of standard metadata formats, enable PGP to fulfil the FAIR data principles: findable, accessible, interoperable, reusable. Database URL: http://edal.ipk-gatersleben.de/repos/pgp/. © The Author(s) 2016. Published by Oxford University Press.
A Semantically Enabled Metadata Repository for Solar Irradiance Data Products
NASA Astrophysics Data System (ADS)
Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.
2014-12-01
The Laboratory for Atmospheric and Space Physics (LASP) has been conducting research in atmospheric and space science for over 60 years and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions, atmospheric science, and climate. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of instances of subject-predicate-object data entities identifiable with a URI. This capability, coupled with SPARQL-over-HTTP read access, enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content.
A variety of ontologies were used in creating the triplestore, including ontologies that came with VIVO such as FOAF. Also, the W3C DCAT ontology was integrated and extended to describe properties of our data products that we needed to capture, such as spectral range. The presentation will describe the architecture, ontology issues, and tools used to create LEMR and plans for its evolution.
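The subject-predicate-object model behind a triplestore such as LEMR can be illustrated with a tiny in-memory store, where a pattern with wildcards plays the role of a SPARQL basic graph pattern. The dataset URIs and property names below are invented for illustration and do not reflect LEMR's actual ontology.

```python
# Minimal in-memory illustration of the subject-predicate-object model
# behind an RDF triplestore. Real deployments store triples in a
# database and answer SPARQL queries over HTTP; the dataset URIs and
# properties here are invented examples.

triples = {
    ("lemr:tsi_v19", "dcat:title", "Total Solar Irradiance (TSI)"),
    ("lemr:tsi_v19", "lemr:currentVersion", "19"),
    ("lemr:ssi_v12", "dcat:title", "Solar Spectral Irradiance (SSI)"),
    ("lemr:ssi_v12", "lemr:spectralRangeNm", "115-320"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    much like a variable in a SPARQL basic graph pattern."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )
```

For instance, `match(p="dcat:title")` lists every dataset title, roughly analogous to `SELECT ?s ?o WHERE { ?s dcat:title ?o }` issued over the repository's SPARQL-over-HTTP endpoint.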
Rolling Deck to Repository (R2R): Products and Services for the U.S. Research Fleet Community
NASA Astrophysics Data System (ADS)
Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Smith, S. R.; Stocks, K. I.
2016-02-01
The Rolling Deck to Repository (R2R) program is working to ensure open access to environmental sensor data routinely acquired by the U.S. academic research fleet. Currently 25 vessels deliver 7 TB/year of data to R2R from a suite of geophysical, oceanographic, meteorological, and navigational sensors on over 400 cruises worldwide. R2R ensures these data are preserved in trusted repositories, discoverable via standard protocols, and adequately documented for reuse. R2R has recently expanded to include the vessels Sikuliaq, operated by the University of Alaska; Falkor, operated by the Schmidt Ocean Institute; and Ronald H. Brown and Okeanos Explorer, operated by NOAA. R2R maintains a master catalog of U.S. research cruises, currently holding over 4,670 expeditions including vessel and cruise identifiers, start/end dates and ports, project titles and funding awards, science parties, dataset inventories with instrument types and file formats, data quality assessments, and links to related content at other repositories. Standard post-field cruise products are published including shiptrack navigation, near-real-time MET/TSG data, underway geophysical profiles, and CTD profiles. Software tools available to users include the R2R Event Logger and the R2R Nav Manager. A Digital Object Identifier (DOI) is published for each cruise, original field sensor dataset, standard post-field product, and document (e.g. cruise report) submitted by the science party. Scientists are linked to personal identifiers such as ORCIDs where available. Using standard identifiers such as DOIs and ORCIDs facilitates linking with journal publications and generation of citation metrics. R2R collaborates in the Ocean Data Interoperability Platform (ODIP) to strengthen links among regional and national data systems, populates U.S. cruises in the POGO global catalog, and is working toward membership in the DataONE alliance. 
It is a lead partner in the EarthCube GeoLink project, developing Semantic Web technologies to share data and documentation between repositories, and in the newly-launched EarthCube SeaView project, delivering data from R2R and other ocean data facilities to scientists using the Ocean Data View (ODV) software tool.
17 CFR 49.19 - Core principles applicable to registered swap data repositories.
Code of Federal Regulations, 2014 CFR
2014-04-01
... registered swap data repositories. 49.19 Section 49.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION (CONTINUED) SWAP DATA REPOSITORIES § 49.19 Core principles applicable to registered swap data repositories. (a) Compliance with core principles. To be registered, and maintain...
17 CFR 49.26 - Disclosure requirements of swap data repositories.
Code of Federal Regulations, 2013 CFR
2013-04-01
... TRADING COMMISSION SWAP DATA REPOSITORIES § 49.26 Disclosure requirements of swap data repositories... swap data repository shall furnish to the reporting entity a disclosure document that contains the... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Disclosure requirements of...
17 CFR 49.26 - Disclosure requirements of swap data repositories.
Code of Federal Regulations, 2012 CFR
2012-04-01
... TRADING COMMISSION SWAP DATA REPOSITORIES § 49.26 Disclosure requirements of swap data repositories... swap data repository shall furnish to the reporting entity a disclosure document that contains the... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Disclosure requirements of...
21 CFR 522.480 - Repository corticotropin injection.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Repository corticotropin injection. 522.480 Section 522.480 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 522.480 Repository corticotropin injection. (a)(1) Specifications. The drug conforms to repository...
10 CFR 960.3-1-3 - Regionality.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REPOSITORY Implementation Guidelines § 960.3-1-3 Regionality. In making site recommendations for repository development after the site for the first repository has been recommended, the Secretary shall give due... repositories. Such consideration shall take into account the proximity of sites to locations at which waste is...
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.
Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel
2016-01-01
Symbiotic organisms search (SOS) is a new, robust, and powerful metaheuristic algorithm that simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in an ecosystem. In the supervised learning area, it is a challenging task to devise a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate its performance, eight datasets selected from the UCI machine learning repository are used in experiments, and the results are compared among seven metaheuristic algorithms. The results show that SOS outperforms the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS achieves better accuracy than most of the compared algorithms.
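As a rough illustration of the idea (not the full SOS of the paper, which uses mutualism, commensalism, and parasitism phases with benefit factors), the following sketch trains a tiny feedforward network by evolving a population of weight vectors with greedy replacement; all names and defaults are invented.

```python
# Greatly simplified SOS-style metaheuristic training of a tiny FNN.
# This is an illustrative sketch, not the algorithm as published.

import math
import random

def fnn(weights, x):
    """Tiny 2-2-1 feedforward network; `weights` is a flat list of 9
    values: 4 hidden weights, 2 hidden biases, 2 output weights, 1 bias."""
    h = [math.tanh(weights[2 * j] * x[0] + weights[2 * j + 1] * x[1] + weights[4 + j])
         for j in range(2)]
    return weights[6] * h[0] + weights[7] * h[1] + weights[8]

def mse(weights, data):
    """Mean squared error over (input, target) pairs."""
    return sum((fnn(weights, x) - y) ** 2 for x, y in data) / len(data)

def sos_train(data, pop_size=10, iters=50, dim=9, seed=1):
    """Evolve a population of weight vectors: a mutualism-like move
    toward the best organism, plus a parasitism-like random mutation.
    Greedy replacement means the best fitness never worsens."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    fit = [mse(w, data) for w in pop]
    for _ in range(iters):
        for i in range(pop_size):
            best = pop[fit.index(min(fit))]
            # Mutualism phase: organism i moves toward the best,
            # guided by its mutual vector with a random partner j.
            j = rng.randrange(pop_size)
            mutual = [(a + b) / 2 for a, b in zip(pop[i], pop[j])]
            cand = [a + rng.random() * (b - m)
                    for a, b, m in zip(pop[i], best, mutual)]
            if mse(cand, data) < fit[i]:
                pop[i], fit[i] = cand, mse(cand, data)
            # Parasitism phase: a mutated clone of i tries to displace j.
            para = pop[i][:]
            para[rng.randrange(dim)] = rng.uniform(-1, 1)
            if mse(para, data) < fit[j]:
                pop[j], fit[j] = para, mse(para, data)
    k = fit.index(min(fit))
    return pop[k], fit[k]
```

Because replacement is greedy, the best fitness in the population can only improve as the number of iterations grows, which is the property the paper's convergence-speed comparison measures.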
Musinguzi, Henry; Lwanga, Newton; Kezimbira, Dafala; Kigozi, Edgar; Katabazi, Fred Ashaba; Wayengera, Misaki; Joloba, Moses Lutaakome; Abayomi, Emmanuel Akin; Swanepoel, Carmen; Croxton, Talishiea; Ozumba, Petronilla; Thankgod, Anazodo; van Zyl, Lizelle; Mayne, Elizabeth Sarah; Kader, Mukthar; Swartz, Garth
2017-01-01
Biorepositories in Africa need significant infrastructural support to meet International Society for Biological and Environmental Repositories (ISBER) Best Practices in order to support population-based genomics research. ISBER recommends a biorepository information management system that can manage workflows from biospecimen receipt to distribution. The H3Africa Initiative set out to develop regional African biorepositories, and Uganda, Nigeria, and South Africa were successfully awarded grants to develop state-of-the-art biorepositories. The biorepositories carried out an elaborate process to evaluate and choose a laboratory information management system (LIMS), with the aim of integrating the three geographically distinct sites. In this article, we review the process and the African experience, share lessons learned, and make recommendations for choosing a biorepository LIMS in the African context.
Development of a user-centered radiology teaching file system
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Fujino, Asa
2011-03-01
Learning radiology requires systematic and comprehensive study of a large knowledge base of medical images. This work presents the development of a digital radiology teaching file system. The proposed system offers a set of services customized to users' contexts and informational needs. This is done by means of an electronic infrastructure that provides easy, integrated access to all relevant patient data at the time of image interpretation, so that radiologists and researchers can examine all available data to reach well-informed conclusions while protecting patient data privacy and security. The system is presented as an environment that implements a distributed clinical database, including medical images, authoring tools, a repository for multimedia documents, and a peer-review model that assures dataset quality. The current implementation has shown that creating clinical data repositories on networked computer environments is a good solution for reviewing information management practices in electronic environments and for creating customized, context-based tools for users connected to the system through electronic interfaces.
Extensible Probabilistic Repository Technology (XPRT)
2004-10-01
projects, such as Centaurus, the Evidence Data Base (EDB), etc.; others were fabricated, such as INS and FED, while others contain data from the open... [flattened table of simulated data sources; recoverable entries: Google Web Report, unlimited, SOAP API; BBC News, unlimited, Web/RSS 1.0; Centaurus Person Demographics, 204,402 people from 240 countries] ...objects of the domain ontology map to the various simulated data sources. For example, the PersonDemographics are stored in the Centaurus database, while...
Persistent Identifiers for Field Expeditions: A Next Step for the US Oceanographic Research Fleet
NASA Astrophysics Data System (ADS)
Arko, Robert; Carbotte, Suzanne; Chandler, Cynthia; Smith, Shawn; Stocks, Karen
2016-04-01
Oceanographic research cruises are complex affairs, typically requiring an extensive effort to secure the funding, plan the experiment, and mobilize the field party. Yet cruises are not typically published online as first-class digital objects with persistent, citable identifiers linked to the scientific literature. The Rolling Deck to Repository (R2R; info@rvdata.us) program maintains a master catalog of oceanographic cruises for the United States research fleet, currently documenting over 6,000 expeditions on 37 active and retired vessels. In 2015, R2R started routinely publishing a Digital Object Identifier (DOI) for each completed cruise. Cruise DOIs, in turn, are linked to related persistent identifiers where available including the Open Researcher and Contributor ID (ORCID) for members of the science party, the International Geo Sample Number (IGSN) for physical specimens collected during the cruise, the Open Funder Registry (FundRef) codes that supported the experiment, and additional DOIs for datasets, journal articles, and other products resulting from the cruise. Publishing a persistent identifier for each field expedition will facilitate interoperability between the many different repositories that hold research products from cruises; will provide credit to the investigators who secured the funding and carried out the experiment; and will facilitate the gathering of fleet-wide altmetrics that demonstrate the broad impact of oceanographic research.
panMetaDocs and DataSync - providing a convenient way to share and publish research data
NASA Astrophysics Data System (ADS)
Ulbricht, D.; Klump, J. F.
2013-12-01
In recent years, research institutions, geological surveys, and funding organizations have started to build infrastructures to facilitate the re-use of research data from previous work. At present, several intermeshed activities are coordinated to make data systems of the earth sciences interoperable and recorded data discoverable. Driven by governmental authorities, ISO 19115/19139 emerged as the metadata standard for discovery of data and services. Established metadata transport protocols like OAI-PMH and OGC CSW are used to disseminate metadata to data portals. With persistent identifiers like DOI and IGSN, research data and corresponding physical samples can be given unambiguous names and thus become citable. In summary, these activities focus primarily on 'ready to give away' data, already stored in an institutional repository and described with appropriate metadata. Many datasets are not 'born' in this state but are produced in small and federated research projects. To make access and reuse of these 'small data' easier, the data should be centrally stored and version controlled from the very beginning of activities. We developed DataSync [1] as a supplemental application to the panMetaDocs [2] data exchange platform, serving as a data management tool for small science projects. DataSync is a Java application that runs on a local computer and synchronizes directory trees into an eSciDoc repository [3] by creating eSciDoc objects via eSciDoc's REST API. DataSync can be installed on multiple computers and can thereby synchronize a research team's files over the internet. XML metadata can be added as separate files that are managed together with data files as versioned eSciDoc objects. A project-customized instance of panMetaDocs provides a web-based overview of the previously uploaded file collection and allows further annotation with metadata inside the eSciDoc repository.
PanMetaDocs is a PHP-based web application that assists in the creation of metadata in any XML-based metadata schema. To reduce manual entry of metadata to a minimum and to make use of contextual information in a project setting, metadata fields can be populated with static or dynamic content. Access rights can be defined to control visibility of, and access to, stored objects. Notifications about recently updated datasets are available via RSS and e-mail, and the entire inventory can be harvested via OAI-PMH. PanMetaDocs is optimized to be harvested by panFMP [4]. It is able to mint dataset DOIs through DataCite and uses eSciDoc's REST API to transfer eSciDoc objects from a non-public 'pending' status to the published status 'released', which makes the data and metadata of a published object available worldwide through the internet. The application scenario presented here shows the adaptation of open source applications to the sharing and publication of data. An eSciDoc repository is used as storage for data and metadata. DataSync serves as a file ingester and distributor, whereas panMetaDocs' main function is to annotate the dataset files with metadata to make them ready for publication and sharing with one's own team or with the scientific community.
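The core decision a DataSync-like client has to make (which files in the local tree are new or changed relative to the repository) can be sketched with content digests. The function names are hypothetical, and the remote inventory is passed in as a plain dict standing in for what the real tool would fetch from eSciDoc's REST API.

```python
# Hypothetical sketch of the change-detection step in a DataSync-like
# directory synchronizer. The real DataSync talks to eSciDoc's REST
# API; here the repository's view is a plain dict of path -> digest.

import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's content, used as a cheap change detector."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def plan_sync(local_dir, remote_index):
    """Return (new_files, changed_files) as sorted relative paths.

    remote_index maps relative paths to content digests, standing in
    for the inventory a client would fetch from the repository.
    """
    new, changed = [], []
    for path in sorted(Path(local_dir).rglob("*")):
        if not path.is_file():
            continue
        rel = str(path.relative_to(local_dir))
        digest = file_digest(path)
        if rel not in remote_index:
            new.append(rel)        # not yet in the repository: upload
        elif remote_index[rel] != digest:
            changed.append(rel)    # content differs: push a new version
    return new, changed
```

Files flagged as changed would become new versions of the corresponding eSciDoc objects, matching the versioned-object behaviour described above.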
ERIC Educational Resources Information Center
Wanapu, Supachanun; Fung, Chun Che; Kerdprasop, Nittaya; Chamnongsri, Nisachol; Niwattanakul, Suphakit
2016-01-01
The issues of accessibility, management, storage and organization of Learning Objects (LOs) in education systems are a high priority of the Thai Government. Incorporating personalized learning or learning styles in a learning object management system to improve the accessibility of LOs has been addressed continuously in the Thai education system.…
17 CFR 49.19 - Core principles applicable to registered swap data repositories.
Code of Federal Regulations, 2013 CFR
2013-04-01
... registered swap data repositories. 49.19 Section 49.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION SWAP DATA REPOSITORIES § 49.19 Core principles applicable to registered swap data repositories. (a) Compliance with core principles. To be registered, and maintain registration, a swap data...
17 CFR 49.19 - Core principles applicable to registered swap data repositories.
Code of Federal Regulations, 2012 CFR
2012-04-01
... registered swap data repositories. 49.19 Section 49.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION SWAP DATA REPOSITORIES § 49.19 Core principles applicable to registered swap data repositories. (a) Compliance with Core Principles. To be registered, and maintain registration, a swap data...
17 CFR 49.22 - Chief compliance officer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... that the registered swap data repository provide fair and open access as set forth in § 49.27 of this...) SWAP DATA REPOSITORIES § 49.22 Chief compliance officer. (a) Definition of Board of Directors. For... data repository, or for those swap data repositories whose organizational structure does not include a...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
... Construction of a Waste Repository on the Settlors' Property Pursuant to the Comprehensive Environmental... a Settlement Agreement pertaining to Construction of a Waste Repository on Settlor's Property... waste repository on the property by resolving, liability the settling party might otherwise incur under...
10 CFR 51.67 - Environmental information concerning geologic repositories.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Environmental information concerning geologic repositories... information concerning geologic repositories. (a) In lieu of an environmental report, the Department of Energy... connection with any geologic repository developed under Subtitle A of Title I, or under Title IV, of the...
15 CFR 1180.10 - NTIS permanent repository.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false NTIS permanent repository. 1180.10... ENGINEERING INFORMATION TO THE NATIONAL TECHNICAL INFORMATION SERVICE § 1180.10 NTIS permanent repository. A... repository as a service to agencies unless the Director advises the Liaison Officer that it has not been so...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
...] Center for Devices and Radiological Health 510(k) Implementation: Online Repository of Medical Device... public meeting entitled ``510(k) Implementation: Discussion of an Online Repository of Medical Device... establish an online public repository of medical device labeling and strategies for displaying device...
Identifying Tensions in the Use of Open Licenses in OER Repositories
ERIC Educational Resources Information Center
Amiel, Tel; Soares, Tiago Chagas
2016-01-01
We present an analysis of 50 repositories for educational content conducted through an "audit system" that helped us classify these repositories, their software systems, promoters, and how they communicated their licensing practices. We randomly accessed five resources from each repository to investigate the alignment of licensing…
10 CFR 960.3-1 - Siting provisions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REPOSITORY Implementation Guidelines § 960.3-1 Siting provisions. The siting provisions establish the... repositories. As required by the Act, § 960.3-1-3 specifies consideration of a regional distribution of repositories after recommendation of a site for development of the first repository. Section 960.3-1-4...
Code of Federal Regulations, 2010 CFR
2010-01-01
... geologic repository operations area. 60.132 Section 60.132 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES Technical Criteria Design Criteria for the Geologic Repository Operations Area § 60.132 Additional design criteria for surface facilities in...
Institutional Repositories as Infrastructures for Long-Term Preservation
ERIC Educational Resources Information Center
Francke, Helena; Gamalielsson, Jonas; Lundell, Björn
2017-01-01
Introduction: The study describes the conditions for long-term preservation of the content of the institutional repositories of Swedish higher education institutions based on an investigation of how deposited files are managed with regards to file format and how representatives of the repositories describe the functions of the repositories.…
Personal Name Identification in the Practice of Digital Repositories
ERIC Educational Resources Information Center
Xia, Jingfeng
2006-01-01
Purpose: To propose improvements to the identification of authors' names in digital repositories. Design/methodology/approach: Analysis of current name authorities in digital resources, particularly in digital repositories, and analysis of some features of existing repository applications. Findings: This paper finds that the variations of authors'…
System and method for responding to ground and flight system malfunctions
NASA Technical Reports Server (NTRS)
Anderson, Julie J. (Inventor); Fussell, Ronald M. (Inventor)
2010-01-01
A system for on-board anomaly resolution for a vehicle has a data repository. The data repository stores data related to different systems, subsystems, and components of the vehicle. The data stored is encoded in a tree-based structure. A query engine is coupled to the data repository. The query engine provides a user and automated interface and provides contextual query to the data repository. An inference engine is coupled to the query engine. The inference engine compares current anomaly data to contextual data stored in the data repository using inference rules. The inference engine generates a potential solution to the current anomaly by referencing the data stored in the data repository.
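The architecture described above (tree-structured data repository, query engine, rule-based inference engine) can be sketched in miniature as follows. Every name, rule, and value here is a hypothetical illustration, not the patented implementation:

```python
# Hypothetical sketch: a tree-based repository, a path-walking query engine,
# and an inference engine that matches anomaly data against context via rules.

# Tree-based repository: each node has a name, optional data, and children.
REPOSITORY = {
    "name": "vehicle",
    "children": [
        {"name": "power", "children": [
            {"name": "battery", "children": [],
             "data": {"nominal_voltage": 28.0}},
        ]},
    ],
}

def query(node, path):
    """Walk the tree along a path of subsystem names (contextual query)."""
    if not path:
        return node
    for child in node.get("children", []):
        if child["name"] == path[0]:
            return query(child, path[1:])
    return None

# Inference rules: (predicate over anomaly + context) -> suggested resolution.
RULES = [
    (lambda a, ctx: a["measured_voltage"] < 0.8 * ctx["data"]["nominal_voltage"],
     "undervoltage: switch to backup bus"),
]

def resolve(anomaly):
    """Compare current anomaly data to repository context using the rules."""
    ctx = query(REPOSITORY, anomaly["path"])
    return [action for pred, action in RULES if ctx and pred(anomaly, ctx)]

print(resolve({"path": ["power", "battery"], "measured_voltage": 20.0}))
```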
Comparing the Hierarchy of Keywords in On-Line News Portals
Tibély, Gergely; Sousa-Rodrigues, David; Pollner, Péter; Palla, Gergely
2016-01-01
Hierarchical organization is prevalent in networks representing a wide range of systems in nature and society. An important example is given by the tag hierarchies extracted from large on-line data repositories such as scientific publication archives, file sharing portals, blogs, on-line news portals, etc. The tagging of the stored objects with informative keywords in such repositories has become very common, and in most cases the tags on a given item are free words chosen by the authors independently. Therefore, the relations among keywords appearing in an on-line data repository are unknown in general. However, in most cases the topics and concepts described by these keywords form a latent hierarchy, with the more general topics and categories at the top and more specialized ones at the bottom. There are several algorithms available for deducing this hierarchy from the statistical features of the keywords. In the present work we apply a recent, co-occurrence-based tag hierarchy extraction method to sets of keywords obtained from four different on-line news portals. The resulting hierarchies show substantial differences not just in the topics rendered as important (being at the top of the hierarchy) or of less interest (categorized low in the hierarchy), but also in the underlying network structure. This reveals discrepancies between the plausible keyword association frameworks in the studied news portals. PMID:27802319
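The co-occurrence idea above can be illustrated with a toy sketch (illustrative data and a deliberately simplified attachment heuristic, not the authors' algorithm): each tag is attached as a child of the strictly more frequent tag it co-occurs with most often.

```python
# Toy co-occurrence-based tag hierarchy: count tag frequencies and pairwise
# co-occurrences, then attach each tag to its best more-frequent partner.
from collections import Counter
from itertools import combinations

# Each "news item" carries a free set of keywords (illustrative data).
items = [
    {"politics", "election", "europe"},
    {"politics", "election"},
    {"politics", "europe"},
    {"sports", "football"},
    {"sports", "football", "europe"},
]

freq = Counter(tag for tags in items for tag in tags)
cooc = Counter()
for tags in items:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

# Parent of a tag = the strictly more frequent tag it co-occurs with most.
parent = {}
for tag in freq:
    candidates = [t for t in freq if freq[t] > freq[tag]]
    if candidates:
        parent[tag] = max(candidates, key=lambda t: cooc[(tag, t)])

print(parent)
```

Tags with maximal frequency (here "politics" and "europe") remain roots; everything else hangs below them, yielding a latent general-to-specialized hierarchy.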
Permanent Disposal of Nuclear Waste in Salt
NASA Astrophysics Data System (ADS)
Hansen, F. D.
2016-12-01
Salt formations hold promise for eternal removal of nuclear waste from our biosphere. Germany and the United States have ample salt formations for this purpose, ranging from flat-bedded formations to geologically mature dome structures. Both nations are revisiting nuclear waste disposal options, accompanied by extensive collaboration on applied salt repository research, design, and operation. Salt formations provide isolation while geotechnical barriers reestablish impermeability after waste is placed in the geology. Between excavation and closure, physical, mechanical, thermal, chemical, and hydrological processes ensue. Salt response over a range of stress and temperature has been characterized for decades. Research practices employ refined test techniques and controls, which improve parameter assessment for features of the constitutive models. Extraordinary computational capabilities require exacting understanding of laboratory measurements and objective interpretation of modeling results. A repository for heat-generating nuclear waste provides an engineering challenge beyond common experience. Long-term evolution of the underground setting is precluded from direct observation or measurement. Therefore, analogues and modeling predictions are necessary to establish enduring safety functions. A strong case for granular salt reconsolidation and a focused research agenda support salt repository concepts that include safety-by-design. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
3-D printing provides a novel approach for standardization and reproducibility of freezing devices
Hu, E; Childress, William; Tiersch, Terrence R.
2017-01-01
Cryopreservation has become an important and accepted tool for long-term germplasm conservation of animals and plants. To protect genetic resources, repositories have been developed with national and international cooperation. For a repository to be effective, the genetic material submitted must be of good quality and comparable to other submissions. However, due to a variety of reasons, including constraints in knowledge and available resources, cryopreservation methods for aquatic species vary widely across user groups which reduces reproducibility and weakens quality control. Herein we describe a standardizable freezing device produced using 3-dimensional (3-D) printing and introduce the concept of network sharing to achieve aggregate high-throughput cryopreservation for aquatic species. The objectives were to: 1) adapt widely available polystyrene foam products that would be inexpensive, portable, and provide adequate work space; 2) develop a design suitable for 3-D printing that could provide multiple configurations, be inexpensive, and easy to use, and 3) evaluate various configurations to attain freezing rates suitable for various common cryopreservation containers. Through this approach, identical components can be accessed globally, and we demonstrated that 3-D printers can be used to fabricate parts for standardizable freezing devices yielding relevant and reproducible cooling rates across users. With standardized devices for freezing, methods and samples can harmonize into an aggregated high-throughput pathway not currently available for aquatic species repository development. PMID:28465185
ERIC Educational Resources Information Center
Paulsson, Fredrik; Naeve, Ambjorn
2006-01-01
Based on existing Learning Object taxonomies, this article suggests an alternative Learning Object taxonomy, combined with a general Service Oriented Architecture (SOA) framework, aiming to transfer the modularized concept of Learning Objects to modularized Virtual Learning Environments. The taxonomy and SOA-framework exposes a need for a clearer…
Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R.; Ellis, Heidi JC; Hinman, M. Lee; Vyas, Jay; Gryk, Michael R.
2012-01-01
Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org). PMID:25328913
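For readers curious what such examples look like, here is a minimal functional-style pipeline in Python (one of the languages the abstract mentions). The GC-content task and helper names are illustrative, not taken from the CONNJUR repository:

```python
# Functional style in Python: pure functions, no mutable state, composed
# right-to-left into a small bioinformatics-flavored pipeline.
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

clean = lambda seqs: [s.upper() for s in seqs]                 # normalize case
valid = lambda seqs: [s for s in seqs if set(s) <= set("ACGT")]  # drop junk
gc = lambda seqs: [sum(b in "GC" for b in s) / len(s) for s in seqs]

pipeline = compose(gc, valid, clean)
print(pipeline(["acgt", "ggcc", "nnnn"]))  # → [0.5, 1.0]
```

The point is the idiom, not the task: each stage is a pure function, so stages can be tested in isolation and recombined freely.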
Informatics and machine learning to define the phenotype.
Basile, Anna Okula; Ritchie, Marylyn DeRiggi
2018-03-01
For the past decade, the focus of complex disease research has been the genotype. From technological advancements to the development of analysis methods, great progress has been made. However, advances in our definition of the phenotype have remained stagnant. Phenotype characterization has recently emerged as an exciting area of informatics and machine learning. The copious amounts of diverse biomedical data that have been collected may be leveraged with data-driven approaches to elucidate trait-related features and patterns. Areas covered: In this review, the authors discuss the phenotype in traditional genetic associations and the challenges this has imposed.Approaches for phenotype refinement that can aid in more accurate characterization of traits are also discussed. Further, the authors highlight promising machine learning approaches for establishing a phenotype and the challenges of electronic health record (EHR)-derived data. Expert commentary: The authors hypothesize that through unsupervised machine learning, data-driven approaches can be used to define phenotypes rather than relying on expert clinician knowledge. Through the use of machine learning and an unbiased set of features extracted from clinical repositories, researchers will have the potential to further understand complex traits and identify patient subgroups. This knowledge may lead to more preventative and precise clinical care.
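The unsupervised, data-driven phenotyping idea above can be sketched with a toy one-dimensional k-means: cluster patients by a clinical feature and let subgroups emerge without labels. The feature values, cluster count, and the hand-rolled k-means are all illustrative assumptions, not an EHR method:

```python
# Toy unsupervised phenotyping: 1-D k-means over a single clinical feature.
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-means; returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Recompute centers; keep the old one if a cluster emptied out.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical lab values (e.g. an HbA1c-like measure) from a repository.
values = [5.0, 5.2, 5.1, 9.8, 10.1, 9.9]
print(kmeans(values, 2))  # two clearly separated patient subgroups
```

In practice one would use many features and a clustering library, but the principle is the same: subgroups come from the data, not from a predefined label.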
17 CFR 49.9 - Duties of registered swap data repositories.
Code of Federal Regulations, 2014 CFR
2014-04-01
... privacy of any and all swap data and any other related information that the swap data repository receives... 17 Commodity and Securities Exchanges 2 2014-04-01 2014-04-01 false Duties of registered swap data... (CONTINUED) SWAP DATA REPOSITORIES § 49.9 Duties of registered swap data repositories. (a) Duties. To be...
Availability and Accessibility in an Open Access Institutional Repository: A Case Study
ERIC Educational Resources Information Center
Lee, Jongwook; Burnett, Gary; Vandegrift, Micah; Baeg, Jung Hoon; Morris, Richard
2015-01-01
Introduction: This study explores the extent to which an institutional repository makes papers available and accessible on the open Web by using 170 journal articles housed in DigiNole Commons, the institutional repository at Florida State University. Method: To analyse the repository's impact on availability and accessibility, we conducted…
Institutional Repositories in Indian Universities and Research Institutes: A Study
ERIC Educational Resources Information Center
Krishnamurthy, M.; Kemparaju, T. D.
2011-01-01
Purpose: The purpose of this paper is to report on a study of the institutional repositories (IRs) in use in Indian universities and research institutes. Design/methodology/approach: Repositories in various institutions in India were accessed and described in a standardised way. Findings: The 20 repositories studied covered collections of diverse…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Emergency plan for the geologic repository operations area... OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Emergency Planning Criteria § 63.161 Emergency plan for the geologic repository operations area through permanent...
77 FR 26709 - Swap Data Repositories: Interpretative Statement Regarding the Confidentiality and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... COMMODITY FUTURES TRADING COMMISSION 17 CFR Part 49 RIN 3038-AD83 Swap Data Repositories... data repositories (``SDRs'').SDRs are new registered entities created by section 728 of the Dodd-Frank... Act amends section 1a of the CEA to add a definition of the term ``swap data repository.'' Pursuant to...
Online Paper Repositories and the Role of Scholarly Societies: An AERA Conference Report
ERIC Educational Resources Information Center
Educational Researcher, 2010
2010-01-01
This article examines issues faced by scholarly societies that are developing and sustaining online paper repositories. It is based on the AERA Conference on Online Paper Repositories, which focused on fundamental issues of policy and procedure important to the operations of online working paper repositories. The report and recommendations address…
10 CFR 960.3-2-2 - Nomination of sites as suitable for characterization.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Implementation Guidelines § 960.3-2-2 Nomination of... of each repository site. For the second repository, at least three of the sites shall not have been nominated previously. Any site nominated as suitable for characterization for the first repository, but not...
Repositories for Research: Southampton's Evolving Role in the Knowledge Cycle
ERIC Educational Resources Information Center
Simpson, Pauline; Hey, Jessie
2006-01-01
Purpose: To provide an overview of how open access (OA) repositories have grown to take a premier place in the e-research knowledge cycle and offer Southampton's route from project to sustainable institutional repository. Design/methodology/approach: The evolution of institutional repositories and OA is outlined raising questions of multiplicity…
Interoperability Across the Stewardship Spectrum in the DataONE Repository Federation
NASA Astrophysics Data System (ADS)
Jones, M. B.; Vieglais, D.; Wilson, B. E.
2016-12-01
Thousands of earth and environmental science repositories serve many researchers and communities, each with their own community and legal mandates, sustainability models, and historical infrastructure. These repositories span the stewardship spectrum from highly curated collections that employ large numbers of staff members to review and improve data, to small, minimal budget repositories that accept data caveat emptor and where all responsibility for quality lies with the submitter. Each repository fills a niche, providing services that meet the stewardship tradeoffs of one or more communities. We have reviewed these stewardship tradeoffs for several DataONE member repositories ranging from minimally (KNB) to highly curated (Arctic Data Center), as well as general purpose (Dryad) to highly discipline or project specific (NEON). The rationale behind different levels of stewardship reflect resolution of these tradeoffs. Some repositories aim to encourage extensive uptake by keeping processes simple and minimizing the amount of information collected, but this limits the long-term utility of the data and the search, discovery, and integration systems that are possible. Other repositories require extensive metadata input, review, and assessment, allowing for excellent preservation, discovery, and integration but at the cost of significant time for submitters and expense for curatorial staff. DataONE recognizes these different levels of curation, and attempts to embrace them to create a federation that is useful across the stewardship spectrum. DataONE provides a tiered model for repositories with growing utility of DataONE services at higher tiers of curation. The lowest tier supports read-only access to data and requires little more than title and contact metadata. Repositories can gradually phase in support for higher levels of metadata and services as needed. 
These tiered capabilities are possible through flexible support for multiple metadata standards and services, where repositories can incrementally increase their requirements as they want to satisfy more use cases. Within DataONE, metadata search services support minimal metadata models, but significantly expanded precision and recall become possible when repositories provide more extensively curated metadata.
Optimizing Resources for Trustworthiness and Scientific Impact of Domain Repositories
NASA Astrophysics Data System (ADS)
Lehnert, K.
2017-12-01
Domain repositories, i.e. data archives tied to specific scientific communities, are widely recognized and trusted by their user communities for ensuring a high level of data quality, enhancing data value, access, and reuse through a unique combination of disciplinary and digital curation expertise. Their data services are guided by the practices and values of the specific community they serve and designed to support the advancement of their science. Domain repositories need to meet user expectations for scientific utility in order to be successful, but they also need to fulfill the requirements for trustworthy repository services to be acknowledged by scientists, funders, and publishers as a reliable facility that curates and preserves data following international standards. Domain repositories therefore need to carefully plan and balance investments to optimize the scientific impact of their data services and user satisfaction on the one hand, while maintaining a reliable and robust operation of the repository infrastructure on the other hand. Staying abreast of evolving repository standards to certify as a trustworthy repository and conducting a regular self-assessment and certification alone requires resources that compete with the demands for improving data holdings or usability of systems. The Interdisciplinary Earth Data Alliance (IEDA), a data facility funded by the US National Science Foundation, operates repositories for geochemical, marine geoscience, and Antarctic research data, while also maintaining data products (global syntheses) and data visualization and analysis tools that are of high value for the science community and have demonstrated considerable scientific impact. Balancing the investments in the growth and utility of the syntheses with resources required for certification of IEDA's repository services has been challenging, and a major self-assessment effort has been difficult to accommodate.
IEDA is exploring a partnership model to share generic repository functions (e.g. metadata registration, long-term archiving) with other repositories. This could substantially reduce the effort of certification and allow effort to focus on the domain-specific data curation and value-added services.
ERIC Educational Resources Information Center
Lau, Siong-Hoe; Woods, Peter C.
2009-01-01
Many organisations and institutions have integrated learning objects into their e-learning systems to make the instructional resources more efficient. Like any other information systems, this trend has made user acceptance of learning objects an increasingly critical issue as a high level of learner satisfaction and acceptance reflects that the…
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
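One hedged reading of the detection step above: convolve the image with a small bank of filters and combine the responses into a normalized probabilistic map. The image and the two tiny filters below are toy stand-ins for learned ones, not the authors' filters:

```python
# Toy filter-bank detection map: valid 2-D cross-correlation with each filter,
# per-pixel max over the bank, then normalization into [0, 1].

def xcorr2d(img, kernel):
    """Valid-mode 2-D cross-correlation on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def detection_map(img, filter_bank):
    """Combine filter responses into a normalized per-pixel map."""
    responses = [xcorr2d(img, k) for k in filter_bank]
    combined = [[max(r[i][j] for r in responses)
                 for j in range(len(responses[0][0]))]
                for i in range(len(responses[0]))]
    peak = max(max(row) for row in combined) or 1.0
    return [[v / peak for v in row] for row in combined]

# A bright 2x2 blob in a dark 4x4 image, and two hand-made "learned" filters.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
bank = [[[1, 1], [1, 1]],      # blob detector
        [[1, -1], [1, -1]]]    # vertical-edge detector
probs = detection_map(img, bank)
print(probs[1][1])  # strongest response sits on the blob
```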
NASA Astrophysics Data System (ADS)
Ward, Dennis W.; Bennett, Kelly W.
2017-05-01
The Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR) are significant enablers of the U.S. Army Research Laboratory (ARL)'s Open Campus Initiative and together create a highly-collaborative research laboratory and testbed environment focused on sensor data and information fusion. SITCORE creates a virtual research development environment allowing collaboration from other locations, including DoD, industry, academia, and coalition facilities. SITCORE combined with AODR provides end-to-end algorithm development, experimentation, demonstration, and validation. The AODR enterprise allows the U.S. Army Research Laboratory (ARL), as well as other government organizations, industry, and academia, to store and disseminate multiple intelligence (Multi-INT) datasets collected at field exercises and demonstrations, and to facilitate research and development (R&D) and advancement of analytical tools and algorithms supporting the Intelligence, Surveillance, and Reconnaissance (ISR) community. The AODR provides a potential central repository for standards-compliant datasets to serve as the "go-to" location for lessons learned and reference products. Many of the AODR datasets have associated ground truth and other metadata, which provides a rich and robust data suite for researchers to develop, test, and refine their algorithms. Researchers download the test data to their own environments using a sophisticated web interface. The AODR allows researchers to request copies of stored datasets and the government to process the requests and approvals in an automated fashion. Access to the AODR requires two-factor authentication in the form of a Common Access Card (CAC) or External Certificate Authority (ECA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiner, Ruth F.; Blink, James A.; Rechard, Robert Paul
This report examines the current policy, legal, and regulatory framework pertaining to used nuclear fuel and high-level waste management in the United States. The goal is to identify potential changes that, if made, could add flexibility and possibly improve the chances of successfully implementing technical aspects of a nuclear waste policy. Experience suggests that the regulatory framework should be established prior to initiating future repository development. Concerning specifics of the regulatory framework, reasonable expectation as the standard of proof was successfully implemented and could be retained in the future; yet the current classification system for radioactive waste, including hazardous constituents, warrants reexamination. Whether or not multiple sites are considered simultaneously in the future, inclusion of mechanisms such as deliberate use of performance assessment to manage site characterization would be wise. Because of experience gained here and abroad, diversity of geologic media is not particularly necessary as a criterion in site selection guidelines for multiple sites. Stepwise development of the repository program that includes flexibility also warrants serious consideration. Furthermore, integration of the waste management system, from storage and transportation to disposition, should be examined and would be facilitated by integration of the legal and regulatory framework. Finally, in order to enhance acceptability of future repository development, the national policy should be cognizant of those policy and technical attributes that enhance initial acceptance, and those that maintain and broaden credibility.
Breytenbach, Amelia; Lourens, Antoinette; Marsh, Susan
2013-04-26
The history of veterinary science in South Africa can only be appreciated, studied, researched and passed on to coming generations if historical sources are readily available. In most countries, material and sources with historical value are often difficult to locate, dispersed over a large area and not part of the conventional book and journal literature. The Faculty of Veterinary Science of the University of Pretoria and its library have access to a large collection of historical sources. The collection consists of photographs, photographic slides, documents, proceedings, posters, audio-visual material, postcards and other memorabilia. Other institutions in the country are also approached if relevant sources are identified in their collections. The University of Pretoria's institutional repository, UPSpace, was launched in 2006. This provided the Jotello F. Soga Library with the opportunity to fill the repository with relevant digitised collections of diverse heritage and learning resources that can contribute to the long-term preservation and accessibility of historical veterinary sources. These collections are available for use not only by historians and researchers in South Africa but also elsewhere in Africa and the rest of the world. Important historical collections such as the Arnold Theiler collection, the Jotello F. Soga collection and collections of the Onderstepoort Journal of Veterinary Research and the Journal of the South African Veterinary Association are highlighted. The benefits of an open access digital repository, the importance of collaboration across the veterinary community and other prerequisites for the sustainability of a digitisation project and the importance of metadata to enhance accessibility are covered.
Tian, Moqian; Grill-Spector, Kalanit
2015-01-01
Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage to unsupervised learning with spatiotemporal continuity or motion information over training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples. PMID:26024454
ERIC Educational Resources Information Center
Chudnov, Daniel
2008-01-01
The author does not know the first thing about building digital repositories. Maybe that is a strange thing to say, given that he works in a repository development group now, worked on the original DSpace project years ago, and worked on a few repository research projects in between. Given how long he has been around people and projects aiming to…
USDA-ARS?s Scientific Manuscript database
The National Clonal Germplasm Repository (NCGR) in Davis is one among the nine repositories in the National Plant Germplasm System, USDA-ARS that is responsible for conservation of clonally propagated woody perennial subtropical and temperate fruit and nut crop germplasm. Currently the repository ho...
Code of Federal Regulations, 2010 CFR
2010-07-01
... repository possesses the capability to provide adequate long-term curatorial services. 79.9 Section 79.9... FEDERALLY-OWNED AND ADMINISTERED ARCHAEOLOGICAL COLLECTIONS § 79.9 Standards to determine when a repository... shall determine that a repository has the capability to provide adequate long-term curatorial services...
10 CFR Appendix II to Part 960 - NRC and EPA Requirements for Preclosure Repository Performance
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false NRC and EPA Requirements for Preclosure Repository... SCREENING OF POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Pt. 960, App. II Appendix II to Part 960—NRC and EPA Requirements for Preclosure Repository Performance Under proposed 40 CFR part 191, subpart A...
Code of Federal Regulations, 2010 CFR
2010-01-01
... license with respect to a geologic repository. 51.109 Section 51.109 Energy NUCLEAR REGULATORY COMMISSION... Public hearings in proceedings for issuance of materials license with respect to a geologic repository... waste repository at a geologic repository operations area under parts 60 and 63 of this chapter, and in...
10 CFR Appendix I to Part 960 - NRC and EPA Requirements for Postclosure Repository Performance
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false NRC and EPA Requirements for Postclosure Repository... SCREENING OF POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Pt. 960, App. I Appendix I to Part 960—NRC and EPA Requirements for Postclosure Repository Performance Under proposed 40 CFR part 191, subpart B...
How to teach medication management: a review of novel educational materials in geriatrics.
Ramaswamy, Ravishankar
2013-09-01
Medication management is an important component of medical education, particularly in the field of geriatrics. The Association of American Medical Colleges has put forth 26 minimum geriatrics competencies under eight domains for graduating medical students; medication management is one of these domains. The Portal of Geriatric Online education (www.POGOe.org) is an online public repository of geriatrics educational materials and modules developed by geriatrics educators and academicians in the United States, freely available for use by educators and learners in the field. The three POGOe materials presented in this review showcase pearls of medication management for medical and other professional students in novel learning formats that can be administered without major prior preparation. The review compares and contrasts the three materials in descriptive and tabular formats to enable its appropriate use by educators in promoting self-learning or group learning among their learners. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
An automated diagnosis system of liver disease using artificial immune and genetic algorithms.
Liang, Chunlin; Peng, Lingxi
2013-04-01
The rise of health care costs is one of the world's most important problems, and disease prediction is a vibrant research area. Researchers have approached this problem using various techniques such as support vector machines and artificial neural networks. This study exploits the immune system's characteristics of learning and memory to solve the problem of liver disease diagnosis. The proposed system combines an artificial immune system with a genetic algorithm to diagnose liver disease: the system architecture is based on the artificial immune system, and the learning procedure uses a genetic algorithm to drive the evolution of the antibody population. The experiments use two benchmark datasets acquired from the well-known UCI machine learning repository. The diagnosis accuracies obtained are very promising compared with other diagnosis systems in the literature. These results suggest that this system may be a useful automatic diagnosis tool for liver disease.
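The antibody-population-plus-genetic-algorithm idea described in this abstract can be sketched in miniature. The snippet below is a hedged illustration, not the authors' system: it evolves candidate antibody sets (one prototype vector per class, a deliberate simplification) by elitist selection, clonal expansion, and Gaussian mutation, and classifies synthetic two-class "patient" data by the nearest antibody.

```python
import random

random.seed(0)

# Synthetic two-class data: class 0 clusters near (0, 0), class 1 near (5, 5).
data = [([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(40)] + \
       [([random.gauss(5, 1), random.gauss(5, 1)], 1) for _ in range(40)]

def classify(antibodies, x):
    # Nearest-antibody rule: each antibody is a (vector, label) pair.
    best = min(antibodies, key=lambda ab: sum((a - b) ** 2 for a, b in zip(ab[0], x)))
    return best[1]

def fitness(antibodies):
    # Fraction of training records the antibody set classifies correctly.
    return sum(classify(antibodies, x) == y for x, y in data) / len(data)

def mutate(antibodies, scale=0.5):
    # Gaussian perturbation of every antibody vector (the GA's mutation step).
    return [([a + random.gauss(0, scale) for a in vec], lab) for vec, lab in antibodies]

# Population of 20 individuals; each individual is an antibody set with
# one randomly placed antibody per class.
pop = [[([random.uniform(-2, 7), random.uniform(-2, 7)], 0),
        ([random.uniform(-2, 7), random.uniform(-2, 7)], 1)] for _ in range(20)]

for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:5]                                   # elitist selection
    pop = survivors + [mutate(s) for s in survivors for _ in range(3)]  # clonal expansion

best = max(pop, key=fitness)
print(round(fitness(best), 2))
```

On this trivially separable toy data the evolved antibody set quickly reaches near-perfect training accuracy; a real diagnosis system would of course use held-out evaluation on the UCI datasets.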
Huser, Vojtech; Cimino, James J.
2013-01-01
Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366
Quarterly Report - May through July 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Laniece E.
2012-08-09
The first quarter of my postgraduate internship has been an extremely varied one, in which I have tackled several different aspects of the project. Because this is the beginning of a new investigation for the Research Library, I think it is appropriate that I explore data management at LANL from multiple perspectives. I have spent a considerable amount of time doing a literature search and taking notes on what I've been reading in preparation for potential writing activities later. The Research Library is not the only research library exploring the possibility of providing services to their user base. The Joint Information Systems Committee (JISC) and the Digital Curation Centre (DCC) in the UK are actively pursuing possibilities to preserve the scientific record. DataOne is a U.S. National Science Foundation (NSF) initiative aimed at helping to curate bioscience data. This is just a tiny sample of the organizations actively looking into the issues surrounding data management on an organizational, cultural, or technical level. I have included a partial bibliography of some papers I have read. Based on what I read, various discussions, and previous library training, I have begun to document the services I feel I could provide researchers in the context of my internship. This is still very much a work in progress as I learn more about the landscape in libraries and at the Laboratory. I have detailed this process and my thoughts on the issue below. As data management is such a complex and interconnected activity, it is impossible to investigate the organizational and cultural needs of the researchers without familiarizing myself with technologies that could facilitate the local cataloging and preservation of data sets. I have spent some time investigating the repository software DSpace.
The library has long maintained the digital object repository aDORe, but the differences in features and the lack of a user interface compared to DSpace have made DSpace a good test bed for this project. However, my internship is not about repository software, and DSpace is just one potential tool for supporting researchers and their data. More details of my repository investigation are given below. The most exciting aspect of the project thus far has been meeting with researchers, some of whom are potential collaborators. Some people I have talked with have been very interested and enthusiastic about the possibility of collaborating, while others have not wanted to discuss the issue at all. I have had discussions with individual researchers managing their own labs as well as with researchers who are part of much larger collaborations. Three of the research groups who I feel are of particular interest are detailed below. I have added an appendix which goes into more detail about the protein crystallography community, which has addressed the complete data life cycle within its field end to end. The issue of data management is much bigger than just my internship, and there are several people and organizations exploring the issues at the Laboratory. I am making every effort to stay focused on small science data sets and to ensure that my activities use standards-based approaches and are sustainable.
HLRW management during MR reactor decommissioning in NRC 'Kurchatov Institute'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chesnokov, Alexander; Ivanov, Oleg; Kolyadin, Vyacheslav
2013-07-01
A program of decommissioning of the MR research reactor at the Kurchatov Institute started in 2008. The decommissioning work presumed a preliminary stage, which included: removal of spent fuel from the near-reactor storage; removal of the spent fuel assemblies of the metal liquid-loop channel from the core; identification, sorting, and disposal of radioactive objects from the reactor gateway; and identification, sorting, and disposal of radioactive objects from cells of the Kurchatov Institute HLRW storage holding radwaste created during the decommissioning of MR. All these works were performed by remote-controlled means, using remote identification methods for highly radioactive objects. The distribution of activity along highly irradiated objects was measured by a collimated radiometer installed on the robot Brokk-90, and a gamma image of each object was registered by a gamma-visor. Spectra of gamma radiation were measured by a gamma locator and a semiconductor detector system. For identification of the presence of uranium isotopes in the HLRW, a technique based on the registration of the characteristic radiation of U was developed. For fragmentation of highly irradiated objects, a cold cutting technique was used, and a dust-suppression system was applied to reduce the volume activity of aerosols in air. The management of HLRW was performed by the remote-controlled robots Brokk-180 and Brokk-330, which executed the sorting, cutting, and packing of highly irradiated parts of contaminated equipment. The use of these techniques reduced the individual and collective doses of the personnel performing the decommissioning. The average individual dose of the personnel was 1.9 mSv/year in 2011, and the collective dose is estimated at 0.0605 man·Sv/year. Use of the remote-controlled machines also enabled reducing the number of working personnel (20 men) and their doses.
X-ray spectrometric methods enable determination of the presence of U in highly irradiated objects and special cans, and separation of them for further spent fuel inspection. The sorting of radwaste enabled shipping of the LLRW and ILRW to special repositories and keeping the HLRW for decay in the Kurchatov Institute repository. (authors)
Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks
NASA Astrophysics Data System (ADS)
Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.
2010-12-01
Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data-processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated from the data at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora repository. With the digital object model, descriptive and provenance metadata can be formally associated with data content, as can external references and other arbitrary auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH.
We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for Biogeochemical Dynamics (ORNL DAAC).
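The digital-object abstraction described above (automatic checksums, versioned content, and RDF-style relationships between objects) can be sketched in a few lines. This is a hedged, stdlib-only illustration with hypothetical class and predicate names, not the Fedora Commons API:

```python
import hashlib

class DigitalObject:
    """Toy digital object: versioned content with integrity checks and
    (subject, predicate, object) relationship triples."""

    def __init__(self, pid):
        self.pid = pid
        self.versions = []   # list of (content, sha256 hex digest) tuples
        self.triples = []    # RDF-style facts linking this object to others

    def add_version(self, content: bytes):
        # Checksum is computed automatically when content is stored.
        digest = hashlib.sha256(content).hexdigest()
        self.versions.append((content, digest))

    def verify(self, version: int) -> bool:
        # Integrity check: recompute the digest and compare to the stored one.
        content, digest = self.versions[version]
        return hashlib.sha256(content).hexdigest() == digest

    def relate(self, predicate: str, other: "DigitalObject"):
        self.triples.append((self.pid, predicate, other.pid))

dataset = DigitalObject("obj:dataset-1")
readme = DigitalObject("obj:readme-1")
dataset.add_version(b"temperature,co2\n280,14.1\n")
dataset.add_version(b"temperature,co2\n280,14.1\n281,14.3\n")   # new version kept alongside the old
dataset.relate("hasMetadata", readme)

print(len(dataset.versions), dataset.verify(0), dataset.triples[0])
```

A production repository would persist these structures, expose them over protocols such as OAI-PMH, and store the triples in a real RDF store; the sketch only shows how the object model binds content, fixity, and relationships together.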
The influence of personality on neural mechanisms of observational fear and reward learning
Hooker, Christine I.; Verosky, Sara C.; Miyakawa, Asako; Knight, Robert T.; D’Esposito, Mark
2012-01-01
Fear and reward learning can occur through direct experience or observation. Both channels can enhance survival or create maladaptive behavior. We used fMRI to isolate neural mechanisms of observational fear and reward learning and investigate whether neural response varied according to individual differences in neuroticism and extraversion. Participants learned object-emotion associations by observing a woman respond with fearful (or neutral) and happy (or neutral) facial expressions to novel objects. The amygdala-hippocampal complex was active when learning the object-fear association, and the hippocampus was active when learning the object-happy association. After learning, objects were presented alone; amygdala activity was greater for the fear (vs. neutral) and happy (vs. neutral) associated object. Importantly, greater amygdala-hippocampal activity during fear (vs. neutral) learning predicted better recognition of learned objects on a subsequent memory test. Furthermore, personality modulated neural mechanisms of learning. Neuroticism positively correlated with neural activity in the amygdala and hippocampus during fear (vs. neutral) learning. Low extraversion/high introversion was related to faster behavioral predictions of the fearful and neutral expressions during fear learning. In addition, low extraversion/high introversion was related to greater amygdala activity during happy (vs. neutral) learning, happy (vs. neutral) object recognition, and faster reaction times for predicting happy and neutral expressions during reward learning. These findings suggest that neuroticism is associated with an increased sensitivity in the neural mechanism for fear learning which leads to enhanced encoding of fear associations, and that low extraversion/high introversion is related to enhanced conditionability for both fear and reward learning. PMID:18573512
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-02-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. 
The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
Abdul Ghaffar Al-Shaibani, Tarik A; Sachs-Robertson, Annette; Al Shazali, Hafiz O; Sequeira, Reginald P; Hamdy, Hosam; Al-Roomi, Khaldoon
2003-07-01
A problem-based learning strategy is used for curriculum planning and implementation at the Arabian Gulf University, Bahrain. Problems are constructed in a way that faculty-set objectives are expected to be identified by students during tutorials. Students in small groups, along with a tutor functioning as a facilitator, identify learning issues and define their learning objectives. We compared objectives identified by student groups with faculty-set objectives to determine the extent of congruence, and identified factors that influenced students' ability to identify faculty-set objectives. Male and female students were segregated and randomly grouped, and a faculty tutor was allocated to each group. This study was based on 13 problems given to entry-level medical students. The pooled objectives of these problems were classified into four categories: structural, functional, clinical, and psychosocial. Univariate analysis of variance was used for comparison, with p < 0.05 considered significant. On average, students identified 54.2% of the faculty-set objectives for each problem. Students identified psychosocial learning objectives more readily than structural ones. Female students identified more psychosocial objectives, whereas male students identified more structural objectives. Tutor characteristics, such as medical/non-medical background and years of teaching, were correlated with the categories of learning issues identified. Students identify part of the faculty-set learning objectives during tutorials with a faculty tutor acting as a facilitator. Students' gender influences the types of learning issues identified. Content expertise of tutors does not influence identification of learning needs by students.
Dynamic Learning Objects to Teach Java Programming Language
ERIC Educational Resources Information Center
Narasimhamurthy, Uma; Al Shawkani, Khuloud
2010-01-01
This article describes a model for teaching Java Programming Language through Dynamic Learning Objects. The design of the learning objects was based on effective learning design principles to help students learn the complex topic of Java Programming. Visualization was also used to facilitate the learning of the concepts. (Contains 1 figure and 2…
A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs
ERIC Educational Resources Information Center
Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry
2004-01-01
This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…
Object Oriented Learning Objects
ERIC Educational Resources Information Center
Morris, Ed
2005-01-01
We apply the object oriented software engineering (OOSE) design methodology for software objects (SOs) to learning objects (LOs). OOSE extends and refines design principles for authoring dynamic reusable LOs. Our learning object class (LOC) is a template from which individualised LOs can be dynamically created for, or by, students. The properties…
Scott, Jonathan L; Moxham, Bernard J; Rutherford, Stephen M
2014-01-01
Teaching and learning in anatomy is undertaken by a variety of methodologies, yet all of these pedagogies benefit from students discussing and reflecting upon their learning activities. An approach of particular potency is peer-mediated learning, through either peer-teaching or collaborative peer-learning. Collaborative, peer-mediated, learning activities help promote deep learning approaches and foster communities of practice in learning. Students generally flourish in collaborative learning settings but there are limitations to the benefits of collaborative learning undertaken solely within the confines of modular curricula. We describe the development of peer-mediated learning through student-focused and student-led study groups we have termed ‘Shadow Modules’. The ‘Shadow Module’ takes place parallel to the formal academically taught module and facilitates collaboration between students to support their learning for that module. In ‘Shadow Module’ activities, students collaborate towards curating existing online open resources as well as developing learning resources of their own to support their study. Through the use of communication technologies and web 2.0 tools these resources are able to be shared with their peers, thus enhancing the learning experience of all students following the module. The Shadow Module activities have the potential to lead to participants feeling a greater sense of engagement with the subject material, as well as improving their study and group-working skills and developing digital literacy. The outputs from Shadow Module collaborative work are open-source and may be utilised by subsequent student cohorts, thus building up a repository of learning resources designed by and for students. Shadow Module activities would benefit all pedagogies in the study of anatomy, and support students moving from being passive consumers to active participants in learning. PMID:24117249
ERIC Educational Resources Information Center
Niemann, Katja; Wolpers, Martin
2015-01-01
In this paper, we introduce a new way of detecting semantic similarities between learning objects by analysing their usage in web portals. Our approach relies on the usage-based relations between the objects themselves rather than on the content of the learning objects or on the relations between users and learning objects. We then take this new…
Mau, Wilfried; Liebl, Max Emanuel; Deck, Ruth; Lange, Uwe; Smolenski, Ulrich Christian; Walter, Susanne; Gutenbrunner, Christoph
2017-12-01
Since the first publication of learning objectives for the interdisciplinary subject "Rehabilitation, Physical Medicine, Naturopathic Treatment" in undergraduate medical education in 2004, a revision has become reasonable due to heterogeneous teaching programmes in the faculties and the introduction of the National Competence Based Catalogue of Learning Objectives in Medicine as well as the "Masterplan Medical Education 2020". Therefore, the German Society of Rehabilitation Science and the German Society of Physical Medicine and Rehabilitation started a structured consensus process using the Delphi method to reduce the learning objectives and arrange them more clearly. Objectives of particular significance are emphasised. All learning objectives are assigned to the cognitive and methodological level 1 or to the action level 2, and they refer to the less detailed National Competence Based Catalogue of Learning Objectives in Medicine. The revised learning objectives will contribute to further progress in competence-based and more homogeneous medical teaching in the core objectives of Rehabilitation, Physical Medicine, and Naturopathic Treatment in the faculties. © Georg Thieme Verlag KG Stuttgart · New York.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... 79765] Public Land Order No. 7742; Withdrawal of Public Land for the Manning Canyon Tailings Repository... period of 5 years to protect the integrity of the Manning Canyon Tailings Repository and surrounding... Repository. The Bureau of Land Management intends to evaluate the need for a lengthier withdrawal through the...
Scaling an expert system data mart: more facilities in real-time.
McNamee, L A; Launsby, B D; Frisse, M E; Lehmann, R; Ebker, K
1998-01-01
Clinical Data Repositories are being rapidly adopted by large healthcare organizations as a method of centralizing and unifying clinical data currently stored in diverse and isolated information systems. Once stored in a clinical data repository, healthcare organizations seek to use this centralized data to store, analyze, interpret, and influence clinical care, quality and outcomes. A recent trend in the repository field has been the adoption of data marts--specialized subsets of enterprise-wide data taken from a larger repository designed specifically to answer highly focused questions. A data mart exploits the data stored in the repository, but can use unique structures or summary statistics generated specifically for an area of study. Thus, data marts benefit from the existence of a repository, are less general than a repository, but provide more effective and efficient support for an enterprise-wide data analysis task. In previous work, we described the use of batch processing for populating data marts directly from legacy systems. In this paper, we describe an architecture that uses both primary data sources and an evolving enterprise-wide clinical data repository to create real-time data sources for a clinical data mart to support highly specialized clinical expert systems.
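The repository-to-data-mart pattern the authors describe can be sketched with an in-memory SQLite database. Table and column names here are hypothetical, and the "repository" is a toy stand-in for an enterprise clinical data repository: the mart is a focused summary table derived from the repository rather than from each source system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical central repository of lab observations, unified from
# many source systems.
conn.execute("""CREATE TABLE repository (
    patient_id INTEGER, test_name TEXT, value REAL, taken_on TEXT)""")
rows = [
    (1, "glucose", 5.4, "2024-01-01"),
    (1, "glucose", 6.1, "2024-02-01"),
    (1, "sodium", 140.0, "2024-01-01"),
    (2, "glucose", 7.9, "2024-01-15"),
]
conn.executemany("INSERT INTO repository VALUES (?, ?, ?, ?)", rows)

# The data mart keeps only the pre-summarized statistics one study team
# needs, populated from the repository in a single refresh query.
conn.execute("""CREATE TABLE glucose_mart AS
    SELECT patient_id,
           COUNT(*)   AS n_tests,
           AVG(value) AS mean_value,
           MAX(value) AS max_value
    FROM repository
    WHERE test_name = 'glucose'
    GROUP BY patient_id""")

for row in conn.execute("SELECT * FROM glucose_mart ORDER BY patient_id"):
    print(row)
```

In the architecture the abstract describes, the refresh would be driven in real time from both primary sources and the repository; the sketch shows only the structural relationship between the general repository and the specialized mart.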
Schematic designs for penetration seals for a reference repository in bedded salt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelsall, P.C.; Case, J.B.; Meyer, D.
1982-11-01
The isolation of radioactive wastes in geologic repositories requires that man-made penetrations such as shafts, tunnels, or boreholes be adequately sealed. This report describes schematic seal designs for a repository in bedded salt referenced to the stratigraphy of southeastern New Mexico. The designs are presented for extensive peer review and will be updated as site-specific conceptual designs when a site for a repository in salt has been selected. The principal material used in the seal system is crushed salt obtained from excavating the repository. It is anticipated that the crushed salt will consolidate as the repository rooms creep closed, to the degree that its mechanical and hydrologic properties will eventually match those of undisturbed, intact salt. For southeastern New Mexico salt, analyses indicate that this process will require approximately 1000 years for a seal located at the base of one of the repository shafts (where there is little increase in temperature due to waste emplacement) and approximately 400 years for a seal located in an access tunnel within the repository. Bulkheads composed of concrete or salt bricks are also included in the seal system as components that will have low permeability during the period required for salt consolidation.
Cimino, James J; Lancaster, William J; Wyatt, Mathew C
2017-01-01
One of the challenges to using electronic health record (EHR) repositories for research is the difficulty of mapping study-subject eligibility criteria to the query capabilities of the repository. We sought to characterize criteria as "easy" (searchable in a typical repository), "hard" (requiring manual review of the record data), and "impossible" (not typically available in EHR repositories). We obtained 292 criteria from 20 studies available from ClinicalTrials.gov and rated them according to our three types, plus a fourth "mixed" type. We had good agreement among three independent reviewers and chose 274 criteria that were characterized by single types for further analysis. The resulting analysis showed typical features of criteria that do and do not map to repositories. We propose that these features be used to guide researchers in specifying eligibility criteria to improve the development of enrollment workflows, including the definition of EHR repository queries for self-service or analyst-mediated retrievals.
Making Research Data Repositories Visible: The re3data.org Registry
Pampel, Heinz; Vierkant, Paul; Scholze, Frank; Bertelmann, Roland; Kindling, Maxi; Klump, Jens; Goebelbecker, Hans-Jürgen; Gundlach, Jens; Schirmbacher, Peter; Dierolf, Uwe
2013-01-01
Researchers require infrastructures that ensure a maximum of accessibility, stability and reliability to facilitate working with and sharing of research data. Such infrastructures are increasingly summarized under the term Research Data Repositories (RDR). The project re3data.org–Registry of Research Data Repositories–began indexing research data repositories in 2012 and offers researchers, funding organizations, libraries and publishers an overview of the heterogeneous research data repository landscape. In July 2013 re3data.org lists 400 research data repositories and counting; 288 of these are described in detail using the re3data.org vocabulary. Information icons help researchers to easily identify an adequate repository for the storage and reuse of their data. This article describes the heterogeneous RDR landscape and presents a typology of institutional, disciplinary, multidisciplinary and project-specific RDR. Further, the article outlines the features of re3data.org and shows how this registry helps to identify appropriate repositories for the storage of and search for research data. PMID:24223762
The Pediatric Imaging, Neurocognition, and Genetics (PING) Data Repository
Jernigan, Terry L.; Brown, Timothy T.; Hagler, Donald J.; Akshoomoff, Natacha; Bartsch, Hauke; Newman, Erik; Thompson, Wesley K.; Bloss, Cinnamon S.; Murray, Sarah S.; Schork, Nicholas; Kennedy, David N.; Kuperman, Joshua M.; McCabe, Connor; Chung, Yoonho; Libiger, Ondrej; Maddox, Melanie; Casey, B. J.; Chang, Linda; Ernst, Thomas M.; Frazier, Jean A.; Gruen, Jeffrey R.; Sowell, Elizabeth R.; Kenet, Tal; Kaufmann, Walter E.; Mostofsky, Stewart; Amaral, David G.; Dale, Anders M.
2015-01-01
The main objective of the multi-site Pediatric Imaging, Neurocognition, and Genetics (PING) study was to create a large repository of standardized measurements of behavioral and imaging phenotypes accompanied by whole genome genotyping acquired from typically-developing children varying widely in age (3 to 20 years). This cross-sectional study produced sharable data from 1493 children, and these data have been described in several publications focusing on brain and cognitive development. Researchers may gain access to these data by applying for an account on the PING Portal and filing a Data Use Agreement. Here we describe the recruiting and screening of the children and give a brief overview of the assessments performed, the imaging methods applied, the genetic data produced, and the numbers of cases for whom different data types are available. We also cite sources of more detailed information about the methods and data. Finally we describe the procedures for accessing the data and for using the PING data exploration portal. PMID:25937488
Introducing sampling entropy in repository based adaptive umbrella sampling
NASA Astrophysics Data System (ADS)
Zheng, Han; Zhang, Yingkai
2009-12-01
Determining free energy surfaces along chosen reaction coordinates is a common and important task in simulating complex systems. Due to the complexity of energy landscapes and the existence of high barriers, one widely pursued objective in developing efficient simulation methods is to achieve uniform sampling among the thermodynamic states of interest. In this work, we have demonstrated sampling entropy (SE) as an excellent indicator both of uniform sampling and of the convergence of free energy simulations. By introducing SE and the concentration theorem into the biasing-potential-updating scheme, we have further improved the adaptivity, robustness, and applicability of our recently developed repository based adaptive umbrella sampling (RBAUS) approach [H. Zheng and Y. Zhang, J. Chem. Phys. 128, 204106 (2008)]. Besides simulations of one-dimensional free energy profiles for various systems, the generality and efficiency of this new RBAUS-SE approach have been further demonstrated by determining two-dimensional free energy surfaces for the alanine dipeptide in the gas phase as well as in water.
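The abstract does not give the exact definition of sampling entropy used in RBAUS-SE. A natural reading, assuming the Shannon entropy of the bin populations along the reaction coordinate (normalized so that 1.0 indicates perfectly uniform sampling), can be sketched as follows; the histogram values shown are illustrative, not from the paper.

```python
import math

def sampling_entropy(counts):
    """Normalized Shannon entropy of histogram bin counts.

    Returns 1.0 for perfectly uniform sampling across the bins and
    approaches 0.0 as sampling concentrates in a single bin.
    Assumes at least two bins and a positive total count.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

uniform = sampling_entropy([10, 10, 10, 10])   # → 1.0
skewed = sampling_entropy([30, 5, 5])          # between 0 and 1
```

In an adaptive scheme such as RBAUS, a quantity like this can be monitored after each biasing-potential update: values approaching 1.0 signal that the walker visits all states of interest with similar frequency.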
The repository-based software engineering program: Redefining AdaNET as a mainstream NASA source
NASA Technical Reports Server (NTRS)
1993-01-01
This report describes the Repository-based Software Engineering Program (RBSE) to inform and update senior NASA managers about the program. It provides background and a historical perspective on software reuse and RBSE for NASA managers who may not be familiar with these topics, drawing upon and updating information from the RBSE Concept Document baselined by NASA Headquarters, Johnson Space Center, and the University of Houston-Clear Lake in April 1992. Several of NASA's software problems are described, along with what RBSE is now doing to address them. Next steps to derive greater benefit from this Congressionally mandated program are also provided; the section on next steps describes the need to work closely with other NASA software quality, technology transfer, and reuse activities, and focuses on goals and objectives relative to that need. RBSE's role within NASA is addressed; there is also the potential for systematic transfer of technology outside of NASA in later stages of the program, which is discussed briefly.
Detection of Serum microRNAs From Department of Defense Serum Repository
Woeller, Collynn F.; Thatcher, Thomas H.; Van Twisk, Daniel; Pollock, Stephen J.; Croasdell, Amanda; Kim, Nina; Hopke, Philip K.; Xia, Xiaoyan; Thakar, Juilee; Mallon, COL Timothy M.; Utell, Mark J.; Phipps, Richard P.
2017-01-01
Objective: The aim of this study was to investigate whether serum samples from the Department of Defense Serum Repository (DoDSR) are of sufficient quality to detect microRNAs (miRNAs), cytokines, immunoglobulin E (IgE), and polycyclic aromatic hydrocarbons (PAHs). Methods: MiRNAs were isolated and quantified by polymerase chain reaction (PCR) array. Cytokines and chemokines related to inflammation were measured using multiplex immunoassays. Cotinine and IgE were detected by enzyme-linked immunosorbent assay (ELISA), and PAHs were detected by liquid chromatography/mass spectrometry. Results: We detected miRNAs, cytokines, IgE, and PAHs with high sensitivity. Eleven of 30 samples tested positive for cotinine, suggesting tobacco exposure. Significant associations between serum cotinine, cytokines, IgE, PAHs, and miRNAs were discovered. Conclusion: We successfully quantified over 200 potential biomarkers of occupational exposure from DoDSR samples. The stored serum samples were not affected by hemolysis and represent a powerful tool for biomarker discovery and analysis in retrospective studies. PMID:27501106
Data Publication: The Evolving Lifecycle
NASA Astrophysics Data System (ADS)
Studwell, S.; Elliott, J.; Anderson, A.
2015-12-01
Datasets are recognized as valuable information entities in their own right that, now and in the future, need to be available for citation, discovery, retrieval and reuse. The U.S. Department of Energy's Office of Scientific and Technical Information (OSTI) provides Digital Object Identifiers (DOIs) for DOE-funded data through partnership with DataCite. The Geothermal Data Repository (GDR) has been using OSTI's Data ID Service since summer 2014 and is a success story for data publishing in several different ways. This presentation attributes the initial success to the insistence of DOE's Geothermal Technologies Office on detailed planning, robust data curation, and submitter participation. OSTI widely disseminates these data products across both U.S. and international platforms and continually enhances the Data ID Service to facilitate better linkage between published literature, supplementary data components, and the underlying datasets within the structure of the GDR repository. Issues of granularity in DOI assignment, the role of new federal government guidelines on public access to digital data, and the challenges still ahead will be addressed.
McDonald, Sandra A; Velasco, Elizabeth; Ilasi, Nicholas T
2010-12-01
Pfizer, Inc.'s Tissue Bank, in conjunction with Pfizer's BioBank (biofluid repository), endeavored to create an overarching internal software package to cover all general functions of both research facilities, including sample receipt, reconciliation, processing, storage, and ordering. Business process flow diagrams were developed by the Tissue Bank and Informatics teams as a way of characterizing best practices both within the Bank and in its interactions with key internal and external stakeholders. Besides serving as a first step for the software development, such formalized process maps greatly assisted the identification and communication of best practices and the optimization of current procedures. The diagrams shared here could assist other biospecimen research repositories (both pharmaceutical and other settings) for comparative purposes or as a guide to successful informatics design. Therefore, it is recommended that biorepositories consider establishing formalized business process flow diagrams for their laboratories, to address these objectives of communication and strategy.
NASA Technical Reports Server (NTRS)
Hanley, Lionel
1989-01-01
The Ada Software Repository is a public-domain collection of Ada software and information. It is one of several repositories located on the SIMTEL20 Defense Data Network host computer at White Sands Missile Range and has been available to any host computer on the network since 26 November 1984. The repository provides a free source of Ada programs and information. It is divided into several subdirectories organized by topic; their names and a brief overview of each topic are provided. The Ada Software Repository on SIMTEL20 serves two basic roles: to promote the exchange and use (reusability) of Ada programs and tools (including components) and to promote Ada education.
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm
Wu, Haizhou; Luo, Qifang
2016-01-01
Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm that simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in an ecosystem. In the supervised learning area, it is a challenging task to devise a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of this method, eight datasets selected from the UCI machine learning repository are used in experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS achieves better accuracy than those trained by most of the compared algorithms. PMID:28105044
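The SOS algorithm referenced above cycles through three phases inspired by symbiosis: mutualism (two organisms both move toward the current best), commensalism (one organism benefits from a partner that is unaffected), and parasitism (a mutated clone tries to displace a random host). The sketch below applies these phases to a generic objective function, minimizing a simple test function rather than training an FNN; the population size, benefit factors, and bounds handling follow the common formulation of SOS but are not taken from this paper.

```python
import random

def sos_minimize(f, dim, lo, hi, pop=20, iters=150, seed=1):
    """Symbiotic organisms search (minimization sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in X]
    clamp = lambda v: min(hi, max(lo, v))

    def accept(k, cand):
        # Greedy replacement: organism k keeps the candidate only if it improves.
        fc = f(cand)
        if fc < fit[k]:
            X[k], fit[k] = cand, fc

    def partner(i):
        # Random index different from i.
        j = rng.randrange(pop - 1)
        return j + (j >= i)

    for _ in range(iters):
        for i in range(pop):
            best = X[min(range(pop), key=fit.__getitem__)]
            # --- mutualism: i and j both benefit via their mutual vector ---
            j = partner(i)
            mv = [(a + b) / 2 for a, b in zip(X[i], X[j])]
            bf1, bf2 = rng.choice((1, 2)), rng.choice((1, 2))  # benefit factors
            accept(i, [clamp(a + rng.random() * (g - m * bf1))
                       for a, g, m in zip(X[i], best, mv)])
            accept(j, [clamp(a + rng.random() * (g - m * bf2))
                       for a, g, m in zip(X[j], best, mv)])
            # --- commensalism: i benefits from j, j is unaffected ---
            j = partner(i)
            accept(i, [clamp(a + rng.uniform(-1, 1) * (g - b))
                       for a, g, b in zip(X[i], best, X[j])])
            # --- parasitism: a mutated clone of i tries to displace a host ---
            j = partner(i)
            accept(j, [rng.uniform(lo, hi) if rng.random() < 0.5 else a
                       for a in X[i]])
    b = min(range(pop), key=fit.__getitem__)
    return X[b], fit[b]

# Usage on a 3-D sphere function; for FNN training, f would instead map a
# flattened weight vector to the network's training error.
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = sos_minimize(sphere, 3, -5.0, 5.0)
```

Because every phase only accepts improving moves, the best fitness in the population is monotonically non-increasing, which is what makes SOS easy to apply to non-differentiable objectives such as an FNN's classification error.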
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogue, F.; Binnall, E.P.
1982-10-01
Reliable instrumentation will be needed to monitor the performance of future high-level waste repository sites. A study has been made to assess instrument reliability in Department of Energy (DOE) waste-repository-related experiments. Though the study covers a wide variety of instrumentation, this paper concentrates on experience with geotechnical instrumentation in hostile repository-type environments. Manufacturers have made some changes to improve the reliability of instruments for repositories. This paper reviews failure modes, rates, and mechanisms, along with manufacturer modifications, and makes recommendations for additional improvements to enhance instrument performance. 4 tables.
Extended Relation Metadata for SCORM-Based Learning Content Management Systems
ERIC Educational Resources Information Center
Lu, Eric Jui-Lin; Horng, Gwoboa; Yu, Chia-Ssu; Chou, Ling-Ying
2010-01-01
To increase the interoperability and reusability of learning objects, Advanced Distributed Learning Initiative developed a model called Content Aggregation Model (CAM) to describe learning objects and express relationships between learning objects. However, the suggested relations defined in the CAM can only describe structure-oriented…
Age-related impairments in active learning and strategic visual exploration.
Brandstatt, Kelly L; Voss, Joel L
2014-01-01
Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.
Perceptual Learning and Attention: Reduction of Object Attention Limitations with Practice
Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin
2012-01-01
Perceptual learning has widely been claimed to be attention driven: attention assists in choosing the relevant sensory information, and attention may be necessary in many cases for learning. In this paper, we focus on the interaction of perceptual learning and attention, namely that perceptual learning can reduce or eliminate the limitations of attention and, correspondingly, that perceptual learning depends on the attention condition. Object attention is a robust limit on performance. Two attributes of a single attended object may be reported without loss, while the same two attributes of different objects can exhibit a substantial dual-report deficit due to the sharing of attention between objects. The current experiments document that this fundamental dual-object report deficit can be reduced, or eliminated, through perceptual learning that is partially specific to retinal location. This suggests that alternative routes established by practice may reduce the competition between objects for processing resources. PMID:19796653
NASA Astrophysics Data System (ADS)
Dunagan, S. C.; Herrick, C. G.; Lee, M. Y.
2008-12-01
The Waste Isolation Pilot Plant (WIPP) is located at a depth of 655 m in bedded salt in southeastern New Mexico and is operated by the U.S. Department of Energy as a deep underground disposal facility for transuranic (TRU) waste. The WIPP must comply with the EPA's environmental regulations, which require a probabilistic risk analysis of releases of radionuclides due to inadvertent human intrusion into the repository at some time during the 10,000-year regulatory period. Sandia National Laboratories conducts performance assessments (PAs) of the WIPP using a system of computer codes representing the evolution of the underground repository and the emplaced TRU waste in order to demonstrate compliance. One of the important features modeled in a PA is the disturbed rock zone (DRZ) surrounding the emplacement rooms in the repository. The extent and permeability of the DRZ play a significant role in the potential radionuclide release scenarios. We evaluated the phenomena occurring in the repository that affect the DRZ and their potential effects on its extent and permeability, and we examined the DRZ's role in determining the performance of the repository. Pressure in the completely sealed repository will be increased by creep closure of the salt and by microbial degradation of the TRU waste contents. Increased pressure in the repository will reduce the extent and permeability of the DRZ, and the reduced DRZ extent and permeability will decrease the amount of brine available to interact with the waste. Because the potential for radionuclide release from the repository depends on the amount of brine that enters it, these coupled biological-geomechanical-geochemical phenomena give the extent and permeability of the DRZ a significant impact on potential radionuclide releases and, in turn, on repository performance.
Sandia is a multi program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04- 94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.
Learning while Babbling: Prelinguistic Object-Directed Vocalizations Indicate a Readiness to Learn
ERIC Educational Resources Information Center
Goldstein, Michael H.; Schwade, Jennifer; Briesch, Jacquelyn; Syal, Supriya
2010-01-01
Two studies illustrate the functional significance of a new category of prelinguistic vocalizing--object-directed vocalizations (ODVs)--and show that these sounds are connected to learning about words and objects. Experiment 1 tested 12-month-old infants' perceptual learning of objects that elicited ODVs. Fourteen infants' vocalizations were…